
devlog #002: New effects! HSL, Local Contrast, Shadows & Highlights, ...

SilverNode devlog #002 is out!

To build a broader photo set covering more types of photography (landscapes, nature, and people from different countries and cultures), we collected some photos from SignatureEdits > Free RAW Photos, which provides free RAW images for any purpose. A big thank you to those kind folks for hosting the site, and to everyone submitting awesome photos to it.

The new features covered in this devlog are:

- HSL panels: create curves that shift hue and adjust saturation and luminance as a function of hue.
- Exposure zones: shadows and highlights, also implemented as curves (a rough sketch follows after this list).
- Experimenting with contrast locality effects.
- Before and after toggle animation for individual effect groups, or the entire edit.
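
For those curious, here is a rough sketch of what a luminance-dependent shadows/highlights gain could look like. This is only an illustration of the general idea; the function names, weights, and constants are made up and it is not the actual SilverNode code.

```cpp
#include <algorithm>
#include <cmath>

struct RGB { float r, g, b; };

// Rec. 709 luma, used as the input coordinate of the curve.
static float luma(const RGB &c) {
    return 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;
}

// Smoothstep falloff: 1 at y = 0, fading to 0 at y = edge.
static float zone_weight(float y, float edge) {
    float t = std::clamp(y / edge, 0.0f, 1.0f);
    return 1.0f - t * t * (3.0f - 2.0f * t);
}

// shadows/highlights are in [-1, +1]; the gain is expressed in stops and
// applied as a uniform RGB multiplication.
RGB apply_shadows_highlights(RGB c, float shadows, float highlights) {
    float y  = luma(c);
    float ws = zone_weight(y, 0.5f);          // strongest for dark pixels
    float wh = zone_weight(1.0f - y, 0.5f);   // strongest for bright pixels
    float gain = std::exp2(shadows * ws + highlights * wh);
    return { c.r * gain, c.g * gain, c.b * gain };
}
```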

Dawoodoz,
I would still stick with a basic gamma curve and white balance, because it will always look natural. Separate saturation for different hues makes photos look clearly fake.

You also need to make sure that UV scales better with changes in luma. It's a fine balance between washed-out luma addition with a fixed UV balance, and RGB multiplication that amplifies additive light to the extreme when brightening dark regions. Preserving natural-looking skin tones is especially difficult when changing contrast.

My preference would be to hide all tools that might cause an unnatural look under advanced options (for those seeking an artistic effect), and to show beginners the "hard to mess up" options with more automation and less fine control, using basic sliders.
Martijn Courteaux,
Dawoodoz
You also need to make sure that UV scales better with changes in luma. It's a fine balance between washed-out luma addition with a fixed UV balance, and RGB multiplication that amplifies additive light to the extreme when brightening dark regions. Preserving natural-looking skin tones is especially difficult when changing contrast.


By UV, you mean chromaticity? In the YUV color model, which is a linear transform from an RGB space, changing Y without changing UV gives very unnatural looks, I think: saturation would be all over the place. The way shadows and highlights work right now is indeed by simply multiplying the RGB tristimulus. You seem to know what you are talking about. Are there any good resources you are aware of that go into more depth on this topic? We can also discuss this on Discord in more depth if you feel like it!
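
To make that concrete, here is a tiny sketch comparing the two approaches being contrasted: adding to Y while keeping UV fixed, versus multiplying the RGB channels. It uses BT.601 constants and made-up values purely for illustration; it is not what either tool actually does.

```cpp
#include <cstdio>

struct RGB { float r, g, b; };
struct YUV { float y, u, v; };

static YUV rgb_to_yuv(RGB c) {
    float y = 0.299f * c.r + 0.587f * c.g + 0.114f * c.b;
    return { y, 0.492f * (c.b - y), 0.877f * (c.r - y) };
}

static RGB yuv_to_rgb(YUV c) {
    return { c.y + 1.140f * c.v,
             c.y - 0.395f * c.u - 0.581f * c.v,
             c.y + 2.032f * c.u };
}

int main() {
    RGB dark_red = { 0.20f, 0.05f, 0.05f };

    // (a) Brighten by adding to Y with UV fixed: the hue stays, but the fixed
    //     channel differences become small relative to the new luma, so the
    //     result looks washed out (roughly 0.60, 0.45, 0.45 here).
    YUV yuv = rgb_to_yuv(dark_red);
    yuv.y += 0.4f;
    RGB washed = yuv_to_rgb(yuv);

    // (b) Brighten by multiplying RGB: channel ratios (and thus saturation)
    //     are preserved, but any additive stray light gets amplified too.
    RGB scaled = { dark_red.r * 4.0f, dark_red.g * 4.0f, dark_red.b * 4.0f };

    std::printf("luma add : %.2f %.2f %.2f\n", washed.r, washed.g, washed.b);
    std::printf("rgb mult : %.2f %.2f %.2f\n", scaled.r, scaled.g, scaled.b);
}
```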

Dawoodoz
My preference would be to hide all tools that might cause an unnatural look under advanced options (for those seeking an artistic effect), and to show beginners the "hard to mess up" options with more automation and less fine control, using basic sliders.


Currently, the sliders are laid out in the order in which the effects are applied. I'd like to keep it that way for now. Once I'm more convinced of how each effect should behave, I might consider redesigning some of the UI components. But you are totally right that an intuitive UI is important.
Miles,
Hiding "advanced" UI is an antipattern for wimps. I think it's fair to say this application is aimed at people who are either already knowledgeable about color grading, or are looking to become knowledgeable, and who want to be able to make fairly specific and potentially arbitrary edits. Hiding large portions of the UI by default is just an annoyance to those people. If you wanted to make a foolproof application for people who are not interested in that level of control, that would be best realized as a different application entirely, with likewise entirely different UI idioms.
Dawoodoz,
mcourteaux

By UV, you mean chromaticity? In the YUV color model, which is a linear transform from an RGB space, changing Y without changing UV gives very unnatural looks, I think: saturation would be all over the place. The way shadows and highlights work right now is indeed by simply multiplying the RGB tristimulus. You seem to know what you are talking about. Are there any good resources you are aware of that go into more depth on this topic? We can also discuss this on Discord in more depth if you feel like it!


Yes, UV refers to the absolute differences between channels, as used in compression.

I worked full-time writing contrast-optimizing camera firmware and can only share the publicly known methods. The best source was articles on photography from National Geographic. You can also find scientific papers on color theory using Google Scholar.

See the final color as a first-degree polynomial (while ignoring the non-linear gamma): PhysicalIntensity = DiffuseColor * DiffuseLight + Specular + StrayRadiation (removed in the image signal processor) + LensFlare. An RGB multiplication will then have a strange effect on the additive terms. Your task is to identify which part is additive and which part belongs to the multiplication (this can be hinted at from local histograms, as a relation between intensity, saturation, and variation), and then compensate saturation accordingly. If an evenly colored surface stays even, then you have solved the equation.
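
A rough sketch of that idea, with the hard part (estimating the additive term, for example from local histograms) left as a placeholder parameter:

```cpp
struct RGB { float r, g, b; };

// Naive brightening: the additive terms (specular, stray radiation, lens
// flare) are multiplied along with the diffuse part, so they get exaggerated
// when dark regions are made bright.
RGB brighten_naive(RGB pixel, float gain) {
    return { pixel.r * gain, pixel.g * gain, pixel.b * gain };
}

// Model-aware brightening: subtract the estimated additive light, scale only
// the multiplicative (DiffuseColor * DiffuseLight) part, then add the
// additive light back. "additive" stands in for whatever estimate you have.
RGB brighten_separated(RGB pixel, RGB additive, float gain) {
    RGB diffuse = { pixel.r - additive.r,
                    pixel.g - additive.g,
                    pixel.b - additive.b };
    return { diffuse.r * gain + additive.r,
             diffuse.g * gain + additive.g,
             diffuse.b * gain + additive.b };
}
```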
Martijn Courteaux,
Dawoodoz
See the final color as a first-degree polynomial (while ignoring the non-linear gamma): PhysicalIntensity = DiffuseColor * DiffuseLight + Specular + StrayRadiation (removed in the image signal processor) + LensFlare. An RGB multiplication will then have a strange effect on the additive terms. Your task is to identify which part is additive and which part belongs to the multiplication (this can be hinted at from local histograms, as a relation between intensity, saturation, and variation), and then compensate saturation accordingly. If an evenly colored surface stays even, then you have solved the equation.


Sounds very interesting, but after reading what you wrote like 10 times, I still don't understand why a uniform multiplication of the RGB values would cause weird effects on speculars and lens flares. Every pixel keeps the same color at all times, just with a different luminance.

However, I'm very interested in learning more. AFAIK, Lightroom doesn't even do this. Do you know if Lightroom uses such techniques? The stack of local Laplacians clearly does not do what you describe. The problem I have right now is that I don't even know what I would be looking for. What is the advantage of separating the two terms? Do you have visual examples of what is, according to you, a good and a bad way of doing this? Also, does this have a scientific name that would give me a start for looking through the literature?
Dawoodoz,
The problem arises because you want to restore contrast rather than preserve the washed-out colors, in order to represent what a human observer would perceive without a foggy lens (eyelids keep your eyes clean) and without lens flares (your reflexes avoid hard light).

In image signal processors, we call this filter the "black point", which is the standard term among hardware vendors. Part of the light that enters a camera is unwanted direct radiation, but most of it is removed by subtracting intensity uniformly across all channels in a chip behind the CMOS/CCD sensor. An original raw image before ISP processing is often so washed out that you can barely see the colors, and local spots of it can remain because the operation is often global, to stay true to the original content.
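
In its simplest global form, the operation is just a uniform subtraction followed by a rescale, something like the sketch below. The offset is a placeholder here; in practice it comes from sensor/ISP calibration or the darkest part of the histogram.

```cpp
#include <algorithm>
#include <vector>

struct RGB { float r, g, b; };

// Global black-point subtraction: remove the same offset from every channel
// of every pixel, then rescale so full-scale white stays at 1.0.
void subtract_black_point(std::vector<RGB> &image, float black) {
    float rescale = 1.0f / (1.0f - black);
    for (RGB &px : image) {
        px.r = std::max(px.r - black, 0.0f) * rescale;
        px.g = std::max(px.g - black, 0.0f) * rescale;
        px.b = std::max(px.b - black, 0.0f) * rescale;
    }
}
```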

Perceived saturation is the relative difference between the RGB channels, but when a uniform color has been added by a subtle beam entering the camera directly and bouncing around, that relation between channel sum and channel difference changes. If the original color was a clear blue, the surface was in shadow, and a flare adds yellow, you end up with an entirely different color. Subtracting the yellow then brings back the surface's original color as a human observer blocking the flare would perceive it.
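
As a made-up numeric example of that blue-plus-yellow case (the values are arbitrary):

```cpp
#include <cstdio>

struct RGB { float r, g, b; };

int main() {
    RGB shadowed_blue = { 0.05f, 0.10f, 0.30f };   // original surface color
    RGB flare         = { 0.20f, 0.20f, 0.00f };   // yellow cast added by the flare

    // What the sensor records: close to grey, the blue is barely visible.
    RGB observed = { shadowed_blue.r + flare.r,
                     shadowed_blue.g + flare.g,
                     shadowed_blue.b + flare.b };

    // Subtracting the estimated flare color restores the original relation
    // between the channels, so the surface looks blue again.
    RGB restored = { observed.r - flare.r,
                     observed.g - flare.g,
                     observed.b - flare.b };

    std::printf("observed: %.2f %.2f %.2f\n", observed.r, observed.g, observed.b);
    std::printf("restored: %.2f %.2f %.2f\n", restored.r, restored.g, restored.b);
}
```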