First, the result screenshots:
The middle image is the original; the background shows the blurred result.
Two system-provided approaches
1. Before iOS 8, the system-provided way to get a frosted-glass effect was the UIToolbar class:
func toolbarStyle() {
    // ScreenWidth / ScreenHeight are assumed to be screen-size constants
    // defined elsewhere in the project.
    let toolRect = CGRect(x: 0, y: 0, width: ScreenWidth / 2, height: ScreenHeight)
    let toolBar = UIToolbar(frame: toolRect)
    toolBar.barStyle = .black
    // textImage is the UIImageView being frosted; the toolbar sits on top of it.
    textImage.addSubview(toolBar)
}
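A minimal call-site sketch, assuming toolbarStyle() lives on the view controller that owns textImage:

override func viewDidLoad() {
    super.viewDidLoad()
    // The translucent toolbar blurs whatever is rendered beneath it,
    // so add it after the image content is in place.
    toolbarStyle()
}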
2. In iOS 8 Apple added a new class, UIVisualEffectView, which produces the frosted-glass effect directly:
/* NS_ENUM_AVAILABLE_IOS(8_0)
 * UIBlurEffectStyleExtraLight, // extra-light (highlight) style
 * UIBlurEffectStyleLight,      // light style
 * UIBlurEffectStyleDark,       // dark style
 * UIBlurEffectStyleExtraDark __TVOS_AVAILABLE(10_0) __IOS_PROHIBITED __WATCHOS_PROHIBITED,
 * UIBlurEffectStyleRegular NS_ENUM_AVAILABLE_IOS(10_0),   // Adapts to user interface style
 * UIBlurEffectStyleProminent NS_ENUM_AVAILABLE_IOS(10_0), // Adapts to user interface style
 */
lazy var effectView: UIVisualEffectView = {
    let effect = UIBlurEffect(style: .light)
    let temp = UIVisualEffectView(effect: effect)
    self.topBackView.addSubview(temp)
    return temp
}()
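One thing the snippet above leaves out is the frame: the lazily created effect view is added with zero size. A minimal sketch of sizing it, assuming topBackView belongs to the same view controller:

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // Keep the blur view covering its superview as layout changes.
    effectView.frame = topBackView.bounds
}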
Core Image
This approach produces a high-quality blur with a very wide adjustable range, so it can be tuned freely to the actual need. The drawback is that it is slow: on the simulator it took me 1-2 seconds, so this work has to run on a background thread.
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:ciImage forKey:kCIInputImageKey];
    // Set the blur radius
    [filter setValue:@30.0f forKey:@"inputRadius"];
    CIImage *result = [filter valueForKey:kCIOutputImageKey];
    CGRect frame = [ciImage extent];
    NSLog(@"%f,%f,%f,%f", frame.origin.x, frame.origin.y, frame.size.width, frame.size.height);
    CGImageRef outImage = [context createCGImage:result fromRect:ciImage.extent];
    UIImage *blurImage = [UIImage imageWithCGImage:outImage];
    // createCGImage returns a +1 reference, so release it to avoid a leak
    CGImageRelease(outImage);
    dispatch_async(dispatch_get_main_queue(), ^{
        // coreImgv is the UIImageView that displays the blurred result
        coreImgv.image = blurImage;
    });
});
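For reference, here is the same pipeline sketched in Swift (my own sketch, not part of the original source). Clamping the input before blurring avoids the transparent fringe that CIGaussianBlur otherwise produces at the image edges, and cropping back to the original extent restores the size:

import CoreImage
import UIKit

func gaussianBlur(_ image: UIImage, radius: Double) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let ciImage = CIImage(cgImage: cgImage)
    guard let filter = CIFilter(name: "CIGaussianBlur") else { return nil }
    // Extend the edge pixels to infinity so the blur has data to sample
    filter.setValue(ciImage.clampedToExtent(), forKey: kCIInputImageKey)
    filter.setValue(radius, forKey: kCIInputRadiusKey)
    // Crop back to the original image rectangle
    guard let result = filter.outputImage?.cropped(to: ciImage.extent) else { return nil }
    let context = CIContext(options: nil)
    guard let outImage = context.createCGImage(result, from: result.extent) else { return nil }
    return UIImage(cgImage: outImage)
}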
vImage
vImage is part of Accelerate.framework, so you need to import the Accelerate header. Accelerate is a library for the vector and matrix math used in digital signal processing and image processing. Since an image can be treated as vector or matrix data, and Accelerate provides highly efficient math APIs, it lends itself naturally to all kinds of image processing. The blur here uses the vImageBoxConvolve_ARGB8888 function.
// Import the header before use:
// #import <Accelerate/Accelerate.h>
+ (UIImage *)boxblurImage:(UIImage *)image withBlurNumber:(CGFloat)blur {
    if (blur < 0.f || blur > 1.f) {
        blur = 0.5f;
    }
    // The convolution kernel size must be an odd number
    int boxSize = (int)(blur * 40);
    boxSize = boxSize - (boxSize % 2) + 1;

    CGImageRef img = image.CGImage;
    vImage_Buffer inBuffer, outBuffer;
    vImage_Error error;
    void *pixelBuffer;

    // Get the raw pixel data out of the CGImage
    CGDataProviderRef inProvider = CGImageGetDataProvider(img);
    CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);

    // Describe the source buffer using the CGImage's properties
    inBuffer.width = CGImageGetWidth(img);
    inBuffer.height = CGImageGetHeight(img);
    inBuffer.rowBytes = CGImageGetBytesPerRow(img);
    inBuffer.data = (void *)CFDataGetBytePtr(inBitmapData);

    pixelBuffer = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
    if (pixelBuffer == NULL) NSLog(@"No pixelbuffer");

    outBuffer.data = pixelBuffer;
    outBuffer.width = CGImageGetWidth(img);
    outBuffer.height = CGImageGetHeight(img);
    outBuffer.rowBytes = CGImageGetBytesPerRow(img);

    error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0,
                                       boxSize, boxSize, NULL, kvImageEdgeExtend);
    if (error) {
        NSLog(@"error from convolution %ld", error);
    }

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
                                             outBuffer.width,
                                             outBuffer.height,
                                             8,
                                             outBuffer.rowBytes,
                                             colorSpace,
                                             kCGImageAlphaNoneSkipLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *returnImage = [UIImage imageWithCGImage:imageRef];

    // Clean up
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixelBuffer);
    CFRelease(inBitmapData);
    CGImageRelease(imageRef);
    return returnImage;
}
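A hypothetical call site in Swift (the original does not name the class that hosts boxblurImage:withBlurNumber:, so ImageBlurTool is a placeholder). As with the Core Image variant, the work is pushed off the main thread:

DispatchQueue.global(qos: .userInitiated).async {
    // blur is normalized to 0...1 and mapped to the kernel size internally
    let blurred = ImageBlurTool.boxblurImage(originalImage, withBlurNumber: 0.6)
    DispatchQueue.main.async {
        imageView.image = blurred
    }
}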
The source code has been uploaded to GitHub: