Referring to this question:
I convert YV12 frame data to RGB using a GLSL shader; the raw image is shown below:
However, the resulting image does not match the original; it is attached below:
The following is my code for uploading the three planes to textures:
- (GLuint) textureY: (Byte*)imageData
widthType: (int) width
heightType: (int) height
{
GLuint texName;
glGenTextures( 1, &texName );
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D( GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, imageData );
//free(imageData);
return texName;
}
- (GLuint) textureU: (Byte*)imageData
widthType: (int) width
heightType: (int) height
{
GLuint texName;
glGenTextures( 1, &texName );
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D( GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, imageData );
//free(imageData);
return texName;
}
- (GLuint) textureV: (Byte*)imageData
widthType: (int) width
heightType: (int) height
{
GLuint texName;
glGenTextures( 1, &texName );
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D( GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, imageData );
//free(imageData);
return texName;
}
- (void) readYUVFile
{
NSString *file = [[NSBundle mainBundle] pathForResource:@"video" ofType:@"yv12"];
NSLog(@"%@",file);
NSData* fileData = [NSData dataWithContentsOfFile:file];
//NSLog(@"%@",[fileData description]);
NSInteger width = 352;
NSInteger height = 288;
NSInteger uv_width = width / 2;
NSInteger uv_height = height / 2;
NSInteger dataSize = [fileData length];
NSLog(@"%i\n",dataSize);
GLint nYsize = width * height;
GLint nUVsize = uv_width * uv_height;
GLint nCbOffSet = nYsize;
GLint nCrOffSet = nCbOffSet + nUVsize;
Byte *spriteData = (Byte *)malloc(dataSize);
[fileData getBytes:spriteData length:dataSize];
Byte* uData = spriteData + nCbOffSet;
//NSLog(@"%@\n",[[NSData dataWithBytes:uData length:nUVsize] description]);
Byte* vData = spriteData + nCrOffSet;
//NSLog(@"%@\n",[[NSData dataWithBytes:vData length:nUVsize] description]);
/**
Byte *YPlanarData = (Byte *)malloc(nYsize);
for (int i=0; i
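The listing is cut off here in the post. Presumably the remainder of readYUVFile creates one texture per plane; a rough sketch of that tail (not the original lines, reusing the ivar names that appear later in the render code) would be:
// Hypothetical reconstruction of the truncated tail of readYUVFile:
// one texture per plane, with the chroma planes at half resolution.
_YPlanarTexture = [self textureY:spriteData widthType:(int)width heightType:(int)height];
_UPlanarTexture = [self textureU:uData widthType:(int)uv_width heightType:(int)uv_height];
_VPlanarTexture = [self textureV:vData widthType:(int)uv_width heightType:(int)uv_height];
free(spriteData);   // glTexImage2D has already copied the data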
And my fragment shader code:
precision highp float;
uniform sampler2D SamplerY;
uniform sampler2D SamplerU;
uniform sampler2D SamplerV;
varying highp vec2 coordinate;
void main()
{
highp vec3 yuv,yuv1;
highp vec3 rgb;
yuv.x = texture2D(SamplerY, coordinate).r;
yuv.y = texture2D(SamplerU, coordinate).r-0.5;
yuv.z = texture2D(SamplerV, coordinate).r-0.5 ;
rgb = mat3( 1, 1, 1,
0, -.34414, 1.772,
1.402, -.71414, 0) * yuv;
gl_FragColor = vec4(rgb, 1);
}
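A few notes on why this sampling scheme works: a GL_LUMINANCE texture is sampled as (L, L, L, 1), so reading the .r component returns the plane value; texture coordinates are normalized, so the half-resolution U and V textures can be sampled with the same coordinate as the Y texture; and GLSL mat3 constructors are column-major, so the matrix above encodes the usual full-range YCbCr-to-RGB conversion. Written per channel (with Y, U, V being the sampled values in [0, 1]) it is:
R = Y + 1.402 * (V - 0.5)
G = Y - 0.34414 * (U - 0.5) - 0.71414 * (V - 0.5)
B = Y + 1.772 * (U - 0.5)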
My confusion is about the conversion formula: when I use this formula to convert the YV12 data directly to RGB24 and draw an image with
CGImageCreate(iwidth,
iheight,
8,
24,
iwidth*3,
colorSpace,
bitmapInfo,
provider,
NULL,
NO,
kCGRenderingIntentDefault);
the resulting image is correct. But using the shader (the direct per-pixel transform is too slow to run on an iOS device) leads to this problem. I have tried some tricks, such as expanding the U and V planes to a (2*uv_width) * (2*uv_height) rectangle and then uploading the textures, but they failed with the same overly red image.
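For comparison, the direct CPU conversion mentioned above can be sketched roughly as follows. This is an illustrative loop, not the original code; it assumes the same full-range coefficients as the shader, takes the three plane pointers separately (as readYUVFile already computes them), and clamps the results to 0-255 before writing the RGB24 buffer that CGImageCreate consumes:
// Illustrative planar YUV 4:2:0 -> RGB24 conversion (a sketch, not the original code).
static unsigned char clamp255(int v) { return (unsigned char)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

void yuv420ToRGB24(const unsigned char *yPlane, const unsigned char *uPlane,
                   const unsigned char *vPlane, unsigned char *rgb,
                   int width, int height)
{
    int uvWidth = width / 2;
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            int y = yPlane[row * width + col];
            int uvIndex = (row / 2) * uvWidth + (col / 2); // 4:2:0 - one chroma sample per 2x2 block
            int u = uPlane[uvIndex] - 128;
            int v = vPlane[uvIndex] - 128;
            unsigned char *p = rgb + (row * width + col) * 3;
            p[0] = clamp255((int)(y + 1.402f * v));                  // R
            p[1] = clamp255((int)(y - 0.34414f * u - 0.71414f * v)); // G
            p[2] = clamp255((int)(y + 1.772f * u));                  // B
        }
    }
}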
How can I resolve this issue?
Attached is my whole glView.m code:
#import "OpenGLView.h"
typedef struct {
float Position[3];
float TexCoord[2];
} Vertex;
const Vertex Vertices[] = {
{{1, -1, 0},{1,1}},
{{1, 1, 0},{1,0}},
{{-1, 1, 0},{0,0}},
{{-1, -1, 0},{0,1}}
};
const GLubyte Indices[] = {
0, 1, 2,
2, 3, 0
};
@interface OpenGLView ()
- (void)setupLayer;
- (void)setupContext;
- (void)setupRenderBuffer;
- (void)setupFrameBuffer;
- (void)render;
- (GLuint)compileShader:(NSString*)shaderName withType:(GLenum)shaderType;
- (void)setupVBOs;
- (void)compileShaders;
- (void) readYUVFile;
@end
@implementation OpenGLView
- (void)setupVBOs {
GLuint vertexBuffer;
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
GLuint indexBuffer;
glGenBuffers(1, &indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);
}
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code
self.backgroundColor = [UIColor redColor];
[self setupLayer];
[self setupContext];
[self setupRenderBuffer];
[self setupFrameBuffer];
[self setupVBOs];
[self compileShaders];
[self readYUVFile];
[self render];
}
return self;
}
+ (Class)layerClass{
return [CAEAGLLayer class];
}
-(void)setupLayer{
_eaglLayer = (CAEAGLLayer *)self.layer;
_eaglLayer.opaque = YES;
}
- (void)setupContext{
EAGLRenderingAPI api = kEAGLRenderingAPIOpenGLES2;
_context = [[[EAGLContext alloc] initWithAPI:api] autorelease];
if (!_context) {
NSLog(@"Failed to initialize OpenGLES 2.0 context");
exit(1);
}
if (![EAGLContext setCurrentContext:_context]) {
NSLog(@"Failed to set current OpenGL context");
exit(1);
}
}
- (void)setupRenderBuffer {
glGenRenderbuffers(1, &_colorRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_eaglLayer];
}
- (void)setupFrameBuffer {
GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER, _colorRenderBuffer);
}
- (GLuint) textureY: (Byte*)imageData
widthType: (int) width
heightType: (int) height
{
GLuint texName;
glGenTextures( 1, &texName );
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D( GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, imageData );
//free(imageData);
return texName;
}
- (GLuint) textureU: (Byte*)imageData
widthType: (int) width
heightType: (int) height
{
GLuint texName;
glGenTextures( 1, &texName );
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RED_EXT, width, height, 0, GL_RED_EXT, GL_UNSIGNED_BYTE, imageData );
//free(imageData);
return texName;
}
- (GLuint) textureV: (Byte*)imageData
widthType: (int) width
heightType: (int) height
{
GLuint texName;
glGenTextures( 1, &texName );
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RED_EXT, width, height, 0, GL_RED_EXT, GL_UNSIGNED_BYTE, imageData );
//free(imageData);
return texName;
}
- (void) readYUVFile
{
NSString *file = [[NSBundle mainBundle] pathForResource:@"video" ofType:@"yv12"];
NSLog(@"%@",file);
NSData* fileData = [NSData dataWithContentsOfFile:file];
//NSLog(@"%@",[fileData description]);
NSInteger width = 352;
NSInteger height = 288;
NSInteger uv_width = width / 2;
NSInteger uv_height = height / 2;
NSInteger dataSize = [fileData length];
NSLog(@"%i\n",dataSize);
GLint nYsize = width * height;
GLint nUVsize = uv_width * uv_height;
GLint nCbOffSet = nYsize;
GLint nCrOffSet = nCbOffSet + nUVsize;
Byte *spriteData = (Byte *)malloc(dataSize);
[fileData getBytes:spriteData length:dataSize];
Byte* uData = spriteData + nCbOffSet;
//NSLog(@"%@\n",[[NSData dataWithBytes:uData length:nUVsize] description]);
Byte* vData = spriteData + nCrOffSet;
//NSLog(@"%@\n",[[NSData dataWithBytes:vData length:nUVsize] description]);
Byte *YPlanarData = (Byte *)malloc(nYsize);
for (int i=0; i
It turned out to be my own foolish mistake: the three texture-upload methods and the fragment shader are correct, but the following code is inconsistent with how the uniform locations were looked up:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _YPlanarTexture);
glUniform1i(_textureUniformY, 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _UPlanarTexture);
glUniform1i(_textureUniformU, 1);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, _VPlanarTexture);
glUniform1i(_textureUniformV, 2);
because the locations were being stored in the texture-name variables instead of the uniform variables:
_YPlanarTexture = glGetUniformLocation(programHandle, "SamplerY");
_UPlanarTexture = glGetUniformLocation(programHandle, "SamplerU");
_VPlanarTexture = glGetUniformLocation(programHandle, "SamplerV");
So replace them with this, storing the locations in the uniform variables:
_textureUniformY = glGetUniformLocation(programHandle, "SamplerY");
_textureUniformU = glGetUniformLocation(programHandle, "SamplerU");
_textureUniformV = glGetUniformLocation(programHandle, "SamplerV");
Then it does the right thing.
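As a general safeguard (not something from the original post), checking the values returned by glGetUniformLocation right after linking makes this kind of mix-up easier to spot, since it returns -1 for any name that is not an active uniform in the program (this assumes the _textureUniform* ivars are declared as GLint):
_textureUniformY = glGetUniformLocation(programHandle, "SamplerY");
_textureUniformU = glGetUniformLocation(programHandle, "SamplerU");
_textureUniformV = glGetUniformLocation(programHandle, "SamplerV");
if (_textureUniformY < 0 || _textureUniformU < 0 || _textureUniformV < 0) {
    NSLog(@"Could not find one of the sampler uniforms in the linked program");
}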