- cross-posted to:
- hackernews
How do you get a sensor data image from a camera?
That’s crazy.
Good read. Funny how I always thought the sensor read RGB directly, instead of simple light levels behind a filter pattern.
For a while the best/fanciest digital cameras had three CCDs, one for each RGB color channel. I’m not sure if that’s still the case or if the color filter process is now good enough to replace it.
Wild how far technology has marched on, and yet we're still essentially using the same basic idea behind Technicolor. But hey, if it works!
This is why I don’t say I need to edit my photos, but instead that I need to process them. “Editing” is clearly understood by the layperson as Photoshop, and while they don’t necessarily understand “processing”, many people still remember taking film to a store and getting it processed into photos they could give someone.
I’m a little confused about how the demosaicing step produced a green-tinted photo. I understand that there are 2x green pixels, but does the naive demosaic process just show the averaged sensor data, which would intrinsically have “too much” green, or was there an error in the demosaicing?
Yes, given the comment about averaging with the neighbours, green will be overrepresented in the average. An additional (smaller) factor is that the colour filters aren’t perfect, and the green filter in particular often has some significant sensitivity to wavelengths that the red and blue filters are meant to pick up.
edit: One other factor I forgot: green photosites are often more sensitive than the red and blue ones.
Plus, the human eye is more sensitive to green than to the other channels.
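A back-of-the-envelope sketch of the two effects the comments above describe: green's double share of photosites and its slightly higher sensitivity. This assumes an RGGB Bayer pattern and made-up numbers; it is not the article's actual pipeline.

```python
import numpy as np

def naive_demosaic(raw):
    """Collapse each 2x2 RGGB block into one RGB pixel by summing the photosites
    of each colour, without normalising for green having two sites per block."""
    rgb = np.zeros((raw.shape[0] // 2, raw.shape[1] // 2, 3))
    rgb[..., 0] = raw[0::2, 0::2]                    # one red site per block
    rgb[..., 1] = raw[0::2, 1::2] + raw[1::2, 0::2]  # two green sites, summed
    rgb[..., 2] = raw[1::2, 1::2]                    # one blue site per block
    return rgb

# Uniform grey scene: every photosite sees the same light, but green filters
# typically pass a bit more of it, so green sites read slightly higher.
raw = np.full((4, 4), 0.40)
raw[0::2, 1::2] *= 1.1
raw[1::2, 0::2] *= 1.1

out = naive_demosaic(raw)
print(out[0, 0])          # roughly [0.40, 0.88, 0.40] -> strong green cast

# Per-channel white balance (scaling so a neutral patch comes out neutral)
# removes the tint; real raw processors do this with measured channel gains.
gains = out[0, 0].max() / out[0, 0]
print(out[0, 0] * gains)  # back to neutral grey
```

Normalising the green sum (and applying proper white-balance multipliers for the light source) is what turns the raw mosaic into a neutral-looking image.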
Is this you? If so, my wife wonders what camera and software you used!