- cross-posted to:
- hackernews
Good read. Funny how I always thought the sensor read RGB directly, instead of simple light levels behind a filter pattern.
Wild how far technology has marched on, and yet we're still essentially using the same basic idea behind Technicolor. But hey, if it works!
I’m a little confused about how the demosaicing step produced a green-tinted photo. I understand that there are 2x as many green pixels, but does the naive demosaic just show the averaged sensor data, which would intrinsically have “too much” green, or was there an error in the demosaicing?
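To illustrate the question: here's a minimal sketch (my own toy example, not the article's code) of a uniform gray scene sampled through an RGGB Bayer mosaic, then reconstructed by summing each 2x2 block per channel. Green has two sample sites per block, so without dividing the green channel by 2 the result comes out green-tinted:

```python
import numpy as np

h, w = 4, 4
scene = np.full((h, w, 3), 0.5)  # uniform mid-gray scene, RGB in [0, 1]

# Build the Bayer mosaic: each sensor pixel records only one channel.
# RGGB layout: R G / G B repeated across the sensor.
mosaic = np.zeros((h, w))
mosaic[0::2, 0::2] = scene[0::2, 0::2, 0]  # red sites
mosaic[0::2, 1::2] = scene[0::2, 1::2, 1]  # green sites (row 1)
mosaic[1::2, 0::2] = scene[1::2, 0::2, 1]  # green sites (row 2)
mosaic[1::2, 1::2] = scene[1::2, 1::2, 2]  # blue sites

# Naive demosaic: gather the samples in each 2x2 block per channel.
r = mosaic[0::2, 0::2]                       # one red sample per block
g = mosaic[0::2, 1::2] + mosaic[1::2, 0::2]  # TWO green samples summed
b = mosaic[1::2, 1::2]                       # one blue sample per block

naive = np.stack([r, g, b], axis=-1)
print(naive[0, 0])  # green channel is doubled relative to red/blue

# Normalizing green by its sample count restores a neutral gray.
corrected = naive * np.array([1.0, 0.5, 1.0])
print(corrected[0, 0])
```

So a green cast is exactly what a naive average/sum would produce; a correct demosaic weights the two green sites accordingly (and real pipelines also apply white balance on top).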
Is this you? If so, my wife wonders what camera and software you used!
This is why I don’t say I need to edit my photos, but instead that I need to process them. “Editing” is clearly understood by the layperson as Photoshop, and while they don’t necessarily understand processing, many people still understand taking film to a store and getting it processed into photos they can give someone.