With the default preset, modules are organized into three functional groups: technical, grading and effects. You can either view all modules in one long list or click on a group to display only the modules belonging to that group.
The technical group contains every module bounded by some physical reality: lens, sensor, in/out color spaces and various signal reconstructions (highlights, noise, etc.)
This module eliminates some of the typical banding artifacts which can occur when darktable's internal 32-bit floating point data are transferred into a discrete 8-bit or 16-bit integer output format for display or file export.
Banding is a problem which can arise when an image is downsampled to a lower bit depth. Downsampling happens regularly when darktable displays or exports the results of a pixelpipe. To avoid banding, you may activate this module. As dithering consumes significant resources, this module is disabled by default.
Although banding is not an inherent problem of any of darktable's modules, some operations may provoke it as they produce a lightness gradient in the image. To mitigate possible artifacts you should consider activating dithering when using the vignetting or the graduated density module (see Section 3.4.3.3, “Vignetting” and Section 3.4.2.19, “Graduated Density”). This is especially relevant for images with extended homogeneous areas such as a cloudless sky. Also, when using a gradient mask (see the section called “gradient”), you should watch out for possible banding artifacts.
Viewed from some distance, an image dithered to a very low bit depth (like “floyd-steinberg 1-bit b&w”) gives the impression of a homogeneous grayscale image. We try to mimic this impression in darktable when you look at zoomed-out images in the center view, in the navigation window and in thumbnails. This is accomplished by dithering those images into a higher number of grayscale levels. Note that, as a consequence, the histogram – which is derived from the navigation window – will show this increased number of levels and is no longer a full match of the output image.
This combobox sets the dithering method. Floyd-Steinberg error diffusion – with some typical output bit depths – and random noise dithering are both supported. Floyd-Steinberg systematically distributes quantization errors over neighboring pixels, whereas random dithering just adds some level of randomness to break sharp tonal value bands. The default setting is “floyd-steinberg auto”, which automatically adapts to the desired output format.
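To make the error-diffusion idea concrete, here is a minimal sketch of Floyd-Steinberg dithering in Python. This is an illustration of the principle only, not darktable's actual implementation: each pixel is quantized, and the quantization error is pushed onto its right and lower neighbors.

```python
# Minimal sketch of Floyd-Steinberg error diffusion (illustration of
# the principle only; not darktable's implementation). The image is a
# 2D list of floats in [0, 1], quantized down to `levels` values.
def floyd_steinberg(img, levels=2):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            old = out[y][x]
            new = min(1.0, max(0.0, round(old * (levels - 1)) / (levels - 1)))
            out[y][x] = new
            err = old - new
            # Distribute the quantization error over the neighbors
            # with the classic 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < w:
                out[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1][x - 1] += err * 3 / 16
                out[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1][x + 1] += err * 1 / 16
    return out

# A flat 50% grey patch dithered to 1 bit becomes a pattern of pure
# black and white pixels whose average stays close to the original grey.
dithered = floyd_steinberg([[0.5] * 8 for _ in range(8)], levels=2)
```

This is why a dithered low-bit-depth image still looks like a smooth grayscale from a distance: the local average of the quantized pixels approximates the original value.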
The visibility of the following examples depends on the quality of your monitor or the print quality.
Banding artifact caused by vignetting (100% crop of an 8-bit PNG; effect heavily exaggerated by strong contrast enhancement).
The same image area, processed as above but with Floyd-Steinberg dithering activated.
This module manages the output profiles for export as well as the rendering intent to be used when mapping between the different color spaces.
darktable comes with pre-defined profiles sRGB, AdobeRGB, XYZ and linear RGB, but you can provide additional profiles by placing these in:
You can define the output color profile in two different places, either in this module, or in the export panel in lighttable mode (see Section 2.3.14, “Export selected”).
Sets the rendering intent for output/export. For more details see Section 3.2.6.3, “Rendering intent”.
Only rendering with LittleCMS2 gives you a choice of rendering intent; the option is hidden if darktable's internal rendering routines are used. Rendering with LittleCMS2 is activated in the preferences dialog (see Section 8.6, “Processing”).
Sets the color profile for output/export, causing darktable to render colors with this profile. darktable embeds the profile data into the output file if supported by the file format – this allows other applications reading the file to correctly interpret its colors.
As not all applications, e.g. image viewers, are aware of color profiles, a general recommendation is to stick to sRGB as the default output profile. You should only deviate from sRGB if this is really required and if you know what you are doing.
Due to the nature of digital sensors, overexposed highlights lack valid color information. Most frequently they appear neutral white or exhibit some color cast, depending on which other image processing steps are involved. This module can “heal” overexposed highlights by replacing their colors with better fitting ones. The module acts on highlight pixels whose luminance values exceed a user-defined threshold. Replacement colors are taken from the neighborhood; both the spatial distance and the luminance distance (range) are taken into account for color selection.
As a limitation of the underlying algorithm, reconstructed colors may sometimes be displayed incorrectly if you zoom into the image in the darkroom view. If this happens you might observe a magenta shift in highlight areas close to high-contrast edges, or you might see colorless highlight areas if you combine this module with the “reconstruct color” method of the “highlight reconstruction” module (see Section 3.4.1.27, “Highlight reconstruction”). These artifacts only influence image display – the final output remains unaffected. It is recommended that you fine-tune the parameters of this module while viewing the full, not zoomed-in, image.
The color reconstruction module replaces the color of all target pixels whose luminance values lie above this threshold. Conversely, only pixels with luminance values below this threshold are taken as valid source pixels for replacement colors. Setting this parameter too high will cause the module to have no effect on any pixels; setting it too low will minimize the “pool” of replacement colors – if no suitable ones are available, the original colors are maintained. Therefore, this parameter exhibits a “sweet spot” characteristic with an optimum setting depending on the individual image.
Defines the spatial distance (x,y-coordinates) that source pixels may have from a target pixel in order for them to contribute to color replacement. Higher values cause ever more distant pixels to contribute; this increases the chance to find a replacement color but makes that color more undefined and less clear.
Defines the range distance (difference in luminance values) that source pixels may have from target pixels in order for them to contribute to color replacement. Higher values cause more pixels to contribute even if their luminance differs more strongly from the target pixels; this again increases the chance to find a replacement color but at the same time increases the risk of unfitting colors creeping in.
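As a rough illustration of how the two distances described above could combine into a single contribution weight, here is a bilateral-style weighting sketch. The Gaussian falloff and the parameter names are assumptions made for this illustration; darktable's actual formula may differ.

```python
import math

# Hypothetical illustration of the weighting principle: a source
# pixel's contribution falls off with both spatial distance and
# luminance (range) distance, as in a bilateral filter. This is an
# assumption for illustration, not darktable's exact implementation.
def replacement_weight(dx, dy, dL, spatial_extent, range_extent):
    spatial = math.exp(-(dx * dx + dy * dy) / (2 * spatial_extent ** 2))
    rng = math.exp(-(dL * dL) / (2 * range_extent ** 2))
    return spatial * rng

# A nearby pixel with similar luminance contributes strongly...
w_near = replacement_weight(1, 0, 0.05, spatial_extent=10, range_extent=0.2)
# ...while a distant pixel with very different luminance contributes little.
w_far = replacement_weight(40, 0, 0.9, spatial_extent=10, range_extent=0.2)
```

Increasing either extent parameter flattens the corresponding falloff, which is why larger values recruit more (but less well-matched) source pixels.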
This combobox defines whether certain replacement colors shall take precedence over others. In its default setting “off” all pixels contribute equally. Setting it to “saturated colors” makes pixels contribute according to their chromaticity – the more saturated a color, the more it contributes. By selecting “hue” you get a choice of giving precedence to a specific hue.
This slider is visible if you set the preference combobox to “hue”. It allows you to select a preferred hue of replacement colors. This only has an effect if the preferred hue is actually present within the selected spatial and range distance of the target pixels (see above). A typical use case is repairing highlights on human skin in situations where diverging colors are in close proximity (e.g. textiles or hair with a luminance close to skin). Setting a hue preference on skin tones prevents these other colors from creeping in.
This module compresses the tonal range of an image by reproducing the tone and color response of classic film. In doing so, it protects the colors and the contrast in mid-tones, recovers the shadows, and compresses bright highlights. It is very suitable for portrait photography, especially in back-lighting situations, but needs extra care when details need to be preserved in highlights (e.g. clouds). The module is derived from the module of the same name in the Blender 3D modeller by T. J. Sobotka. While it is primarily intended to recover high dynamic range from raw sensor data, it can be used with any image as a replacement for the base curve module. The developer provided a detailed explanation of the module in a video called Filmic RGB: remap any dynamic range in darktable 3.0. Filmic rgb is the successor of the filmic module provided in darktable 2.6.x. While the underlying principles did not change much, users of the previous version should not expect a 1:1 translation of their workflow, and may find the section called “Filmic rgb for darktable 2.6 filmic users” useful.
In order to get the best out of filmic rgb, images need some preparation:
In-camera, expose the shot “to the right”. This implies underexposing the shot so that the highlights sit at the right of the histogram, just on the verge of clipping, but not clipped. It does not matter if the picture preview is very dark on your camera screen: as long as the highlights are unclipped, filmic rgb should be able to recover details from the raw data. Beware that clipped data are not recoverable. Some cameras have a clipping alert preview to help you diagnose this, and some even have a highlight-priority exposure mode.
In the exposure module, push the exposure until the midtones are clear enough. Do not worry about losing the highlights: they will be recovered as part of the filmic rgb processing. However, it is important to avoid negative pixels in black areas: the computations done by filmic rgb will produce unpredictable results in this case. For some camera models (mainly Canon), rawspeed (the raw decoding library of darktable) may set an exaggerated black level, resulting in crushed blacks and negative values. If so, brighten the blacks by setting a negative black level value in the exposure module.
If you plan on using filmic rgb's auto-tuners, use the white balance module first to correct any color casts and get neutral colors. In RGB color spaces, luminance and chrominance are linked, and filmic rgb's luminance detection relies on accurate measures of both. If your picture is very noisy, add an initial step of denoising to help the black exposure readings, and use a high quality demosaicing.
If you plan on using filmic rgb's chrominance preservation mode, avoid using any tone mapping module as well as the base curve module. These may produce unpredictable color shifts that would make the chrominance preservation useless. Neither of these modules is usually needed if you use filmic rgb.
The filmic rgb module aims at mapping the photographed scene (RAW image) dynamic range to the (smaller) display dynamic range. This mapping is defined in three steps, each handled in a separate tab in the interface:
The scene tab contains the “input” settings of the scene: what constitutes middle grey, white and black in the photographed scene.
The look tab contains the parameters of the mapping applied to the input parameters defined in the scene tab. Notably, this part of the module applies an S-shaped parametric curve to enhance the contrast and remap the grey value to the middle grey of the display. This is similar to what the base curve or tone curve modules do.
The display tab defines the output settings needed to map the transformed image to the display. In typical use cases, this tab should only rarely need to be changed.
The slider ranges of filmic rgb are limited to usual and safe values, but values outside these ranges can be entered by right-clicking on a slider and typing the value on the keyboard. Filmic rgb has no neutral parameters that result in a no-operation: as soon as the module is enabled, the image is always at least slightly affected.
The curves at the top of the module are read-only and serve as a guide for the operations performed with the sliders. The bright curve is the tone mapping curve, where the abscissa represents the scene exposure and the ordinate represents the display exposure. The dark curve is the desaturation curve, representing the percentage of saturation as a function of the scene exposure.
The middle-grey luminance is the luminance in RGB space of the scene-referred 18% grey. Its color picker tool reads the average luminance over the drawn area. If you happen to have a grey card or a color chart (IT8 chart or colorchecker) shot in the scene lighting conditions, the grey color picker tool can be used to quickly sample the luminance of the grey patch on the picture. In other situations, the color picker can be used to sample the average luminance of the subject.
This setting has an effect on the picture that is analogous to a lightness correction. Values close to 100% do not compress the highlights but fail to recover shadows. Values close to 0% greatly recover the shadows but compress the highlights more harshly and result in local-contrast losses. The standard middle-grey value for linearly encoded camera RGB is 18%. Good values of grey are usually the average luminance of the whole picture or of the subject. In studio and indoor settings (low dynamic range scenes), proper grey values are found between 15% and 18%. In high dynamic range scenes (landscapes, back-lit portraits), proper grey values lie between 1.25% and 9%.
When the middle-grey luminance is modified, the white and black exposures are automatically adjusted accordingly, to prevent the dynamic range from clipping and to help you find the right parameters faster. If you are not happy with the automatic adjustment performed by the grey slider, you can correct the white and black exposure parameters afterwards.
The white relative exposure is the number of stops (EV) between pure white and the middle grey. It is the right bound of the dynamic range. It should be adjusted to avoid highlight clipping. The white exposure color picker tool reads the maximum luminance in RGB space over the drawn area, assumes it is pure white, and sets the white exposure parameter to remap the reading to 100% luminance.
When the grey is set at 18%, the white exposure will always be around 2.45EV. When the grey is set at 100%, the white exposure should be set at 0EV.
The black relative exposure is the number of stops (EV) between pure black and the middle grey. It is the left bound of the dynamic range. The black exposure color-picker tool reads the minimum luminance in RGB space over the drawn area, assumes it is pure black, and sets the black exposure parameter to remap the minimum reading to 0% luminance. The black color picker measurement is very sensitive to noise, and cannot distinguish whether the minimum luminance is pure black (actual data) or just noise. It works better on low-ISO pictures and with high quality demosaicing. When the color picker puts the black exposure at -16EV, it is a sign that the measurement failed and you need to adjust it manually.
The black relative exposure allows you to choose how far you want to recover the lowlights. Contrary to the white exposure, it is not always possible to completely avoid clipping the blacks. Every camera sensor has a maximum physical dynamic range for each ISO value (you can find measurements on DXOmark or DPreview); the software dynamic range in filmic rgb (dynamic range = white exposure - black exposure) should generally not be greater than the physical dynamic range of the sensor (10-14EV in most cases). Note that the dynamic range of the scene can be lower than that of the camera, especially indoors.
The auto-tune color picker combines all three color pickers above, and allows you to set the grey, white and black exposures all at once, using the average of the drawn region as the grey estimation, the maximum as the white, and the minimum as the black. This gives good results in landscape photography but usually fails for portraits and indoor scenes.
When no true white and no true black are available on the scene, the maximum and minimum RGB values read on the image are not valid assumptions anymore, so the dynamic range scaling symmetrically shrinks or enlarges the detected dynamic range and the current parameters. This works with all color pickers, and adjusts the current values of white and black relative exposures.
The filmic rgb S-curve is created from the user parameters, by computing the position of virtual nodes and interpolating them, similarly to the tone curve module (but here, the nodes cannot be moved manually). The filmic rgb S-curve is split into three parts: a middle linear part, and two extremities that transition smoothly from the slope of the middle part to the ends of the exposure range.
The contrast slider controls the slope of the middle part of the curve, as illustrated in the graph display.
The contrast parameter drives the slope of the central part of the curve. The larger the dynamic range is, the greater the contrast should be set. This parameter mostly affects mid-tones.
When the contrast is set to 1, this disables the S-curve.
The latitude is the range between the two nodes enclosing the central linear portion of the curve, expressed as a percentage of the dynamic range defined in the scene tab (white relative exposure minus black relative exposure). It is the luminance range that is remapped in priority, and it is remapped to the luminance interval defined by the contrast parameter. It is usually advisable to keep the latitude as large as possible while avoiding clipping. If clipping is observed, you can compensate for it by decreasing the latitude, by shifting the latitude interval with the shadows/highlights balance parameter, or by decreasing the contrast.
The latitude also defines the range of luminances that is not desaturated at the extremities of the luminance range (see the section called “Extreme luminance saturation (Look tab)”).
By default, the latitude is centered in the middle of the dynamic range. If this produces clipping in one part or the other of the curve, the balance parameter allows you to slide the latitude along the slopes, towards the shadows or towards the highlights. This allows more room to be given to one extremity of the dynamic range than to the other, if the image properties demand it.
The darker curve in the graph of the module shows the desaturation of the extremities of the luminance range (black and white): since black and white do not have a color, they should normally be associated with 0% saturation. The default saturation is set to 100% in the range defined by the latitude, and decreases down to 0% outside of that range. One of the advantages of this operation is that, since the color components do not clip at the same rate in the image, desaturating them avoids fringes around the high exposures.
If the bright colors feel too desaturated, you should check that the white-relative exposure setting does not clip the high luminance spots, and if not, increase the extreme luminance saturation parameter.
The preserve chrominance setting indicates how the chrominance should be handled by filmic rgb: either not at all, or using one of the three provided norms.
When applying the S-curve transformation independently on each color, the proportions of the colors get modified, which modifies the properties of the underlying spectrum, and ultimately the chrominance of the image. This is what happens if you choose no in the preserve chrominance parameter. This value may yield seemingly “better” results than the other values, but it may negatively impact later parts of the pipeline, when it comes to global saturation for example.
The other values of this parameter all work in a similar way: instead of applying the S-curve to the channels R, G and B independently, filmic rgb computes a norm N, divides all three components by that norm, and applies the S-curve to N. This way, the relationship between the channels is preserved.
The different values of the preserve chrominance parameter indicate which norm is used (the value used for N):
max RGB is the maximum value of the three channels R, G and B. This is the behaviour of the previous version of the filmic module. It tends to darken the blues, especially skies, and to yield halos/fringes, especially if some channels are clipped.
luminance Y is a linear combination of the three channels R, G and B. It tends to darken the reds, and to increase the local contrast in the reds.
RGB power norm is the sum of the cubes of the three channels R, G and B, divided by the sum of their squares, that is (R³ + G³ + B³)/(R² + G² + B²). It is usually a good compromise between the max RGB and luminance Y values.
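The three norms can be sketched for a single RGB triple as follows. This is an illustration only: the luminance coefficients shown are the common Rec. 709 ones, whereas darktable's actual coefficients depend on the working color profile.

```python
# Illustrative sketch of the three norms (not darktable's code).
def max_rgb(r, g, b):
    return max(r, g, b)

def luminance_y(r, g, b):
    # Rec. 709 luminance coefficients: one common linear combination.
    # darktable's actual coefficients depend on the working profile.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def rgb_power_norm(r, g, b):
    # (R^3 + G^3 + B^3) / (R^2 + G^2 + B^2), as stated above.
    return (r**3 + g**3 + b**3) / (r**2 + g**2 + b**2)
```

The S-curve is then applied to the chosen norm N, and each channel is scaled by curve(N)/N, which is what keeps the channel ratios (and hence the chrominance) intact.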
There is no “right” choice of norm – the best choice depends on the picture to which it is applied. You should experiment and decide for yourself on a case-by-case basis.
The destination parameters set the target luminance values used to remap the tones through filmic rgb. The default parameters will work 99% of the time; the remaining 1% is when you output to a linear RGB space (REC709, REC2020) for media handling log-encoded data. These settings should then be used with caution because darktable does not allow separate pipelines for display preview and file output.
The target black luminance parameter sets the ground-level black of the target medium. Set it greater than 0% if you want raised, faded blacks to achieve a retro look.
This is the middle-grey of the output medium, used as a target for the central node of the filmic rgb S-curve. On gamma-corrected media, the actual grey is computed with the gamma correction (middle-grey^(1/gamma)), so a middle-grey parameter of 18% with a gamma of 2.2 gives an actual middle-grey target of 45.87%.
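The quoted figure follows directly from the formula above and can be checked with a one-line computation:

```python
# Checking the figure quoted above: a linear middle-grey of 18% pushed
# through a 1/2.2 gamma correction lands at roughly 45.87%.
middle_grey = 0.18
gamma = 2.2
actual_grey = middle_grey ** (1 / gamma)  # ≈ 0.4587
```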
The target white luminance parameter sets the ceiling-level white of the target medium. Set it lower than 100% if you want dampened, muted whites to achieve a retro look.
The power of the output transfer function, often improperly called the gamma (only screens have a gamma), is the parameter used to raise or compress the mid-tones to account for display non-linearities or to avoid quantization artifacts when encoding in 8-bit file formats. This is a common operation when applying ICC color profiles (except for linear RGB spaces, like REC 709 or REC 2020, which have a linear “gamma” of 1.0). However, at the output of filmic rgb, the signal is logarithmically encoded, which is not something ICC color profiles know how to handle. As a consequence, if we let them apply a gamma of 1/2.2 on top, the correction is applied twice and the middle-grey is remapped to 76% instead of 45% as it should be.
To avoid this double correction and washed-out pictures, filmic rgb applies a “gamma” compression reverting the output ICC gamma correction, so that the middle-grey is correctly remapped at the end. To remove this compression, set the destination power factor to 1.0 and the middle-grey destination to 45%.
The filmic rgb module can seem pretty complex; here is a proposed workflow for processing an image with filmic rgb to obtain a well-exposed picture from a RAW file.
Modify the exposure in the exposure module so that the midtones are clear enough. Do not worry about losing details in the highlights: they will be recovered by the next steps of the processing.
In filmic rgb, start with “neutral” parameters: set the middle grey luminance to 18.45% in the scene tab, and set the contrast to 1 in the look tab.
Adjust the white-relative and black-relative exposures in the scene tab; set the middle grey luminance as well.
In the look tab, experiment with the contrast parameter. Increase the latitude as much as you can without clipping the curve, and shift it with the shadows/highlights balance parameter.
filmic rgb tends to compress the local contrast – you can compensate for this using the local contrast module.
You may also want to increase the saturation in the color balance module, and adjust settings in the tone equalizer module.
Do the last adjustments in filmic rgb, and your picture is now ready for creative processing.
Filmic rgb is a reimplementation of the filmic module, and some adjustments are necessary to switch from one version to the other. This last section outlines the most important differences; a more comprehensive overview is available as a video called darktable 3.0 filmic explained to users of darktable 2.6. The major differences when it comes to usage are the following:
The default parameters of both modules are not comparable: activating the filmic rgb module with default parameters does not yield the same results as the previous filmic module with default parameters.
The latitude is now expressed as a percentage of the dynamic range instead of in absolute EV values.
The saturation slider that was present in the previous version of filmic to avoid oversaturation is not necessary anymore since filmic rgb does a much better job at preserving colors.
The previous version of filmic always used the prophoto RGB profile; filmic rgb respects the working color profile defined in the input color profile module. To keep the same behaviour, you can set the working profile to linear prophoto RGB.
To achieve similar results between the previous version of filmic and filmic rgb, the following steps are suggested:
Transfer the parameters from filmic to filmic rgb. The latitude parameter is now given as a percentage of the input dynamic range: compute that percentage from your filmic input values.
Lower the contrast.
Set the extreme luminance saturation to 50%, unless you are using the chrominance preservation.
Adjust the shadows/highlights balance to avoid clipping of the curve.
Raise the middle grey luminance a bit, and set the dynamic range scaling to approximately 6%.
The old preserve chrominance setting corresponds to the max RGB mode; in that case, do not modify the extreme luminance saturation.
If you experience weird color shifts, change the working color space to prophoto RGB in the input color profile module.
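The latitude conversion mentioned in the first step of the list above is simple arithmetic. With hypothetical example values (these numbers are illustrative, not recommendations):

```python
# Converting an absolute latitude in EV (old filmic) into a percentage
# of the dynamic range (filmic rgb). Example values only.
white_relative_ev = 4.0    # EV above middle grey
black_relative_ev = -8.0   # EV below middle grey
old_latitude_ev = 3.0      # latitude as set in the old filmic module

dynamic_range_ev = white_relative_ev - black_relative_ev      # 12 EV
latitude_percent = 100 * old_latitude_ev / dynamic_range_ev   # 25 %
```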
darktable comes with base curve presets that mimic the curves of various manufacturers. These are automatically applied to raw images according to the manufacturer ID found in Exif data. For several camera models darktable comes with base curves adapted for that specific model. A configuration option in the processing tab in preferences dialog (see Section 8.6, “Processing”) defines whether darktable by default should apply the per-camera base curve or the manufacturer one.
You can adjust an existing base curve or create a new one. The base curve is defined by two or more nodes. You can move any node to modify the curve. You can also create additional nodes by clicking on a curve segment between two nodes. With Ctrl+click you generate a new node at the x-location of the mouse pointer and the corresponding y-location of the current curve – this adds a node without the risk of accidentally modifying the curve. To remove a node, move it outside of the widget area.
Tip: If you intend to take full manual control of the tonal values with the tone curve module or the zone system module (see Section 3.4.2.13, “Tone curve” and Section 3.4.2.10, “Zone system”) it may be easier to leave the image in linear RGB. Disable the base curve module in this case.
This combobox toggles between “linear” and “logarithmic” view. In the logarithmic view more space is given to the lower values allowing a more fine-grained adjustment of the shadows.
This control triggers the exposure fusion feature. You can choose to merge the image with one or two copies of itself after applying the current base curve and boosting its exposure by a selectable number of EV units. The resulting image is thus a combination of two or three different exposures of the original image. Use this to compress dynamic range for extremely underexposed images or for true HDR input. For best results, use the exposure module (see Section 3.4.1.12, “Exposure”) to apply a suitable adjustment for correctly exposed highlights.
This slider is only visible if the exposure fusion feature is activated. It allows you to set the exposure difference between the merged images in EV units (default 1).
This slider is only visible if the exposure fusion feature is activated. It allows you to choose how the multiple exposures are computed. With a bias of 1 (the default), the image is fused with overexposed copies of itself. With a bias of -1, it is fused with underexposed copies. A bias of 0 tries to preserve the overall lightness of the image by combining both over- and under-exposed copies of the image.
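The fusion principle can be sketched for a single pixel and the default bias of 1. This is a much-simplified illustration: the actual feature performs multi-scale exposure fusion over the whole image, and the middle-grey weighting function used here is an assumption for illustration.

```python
import math

# Much-simplified sketch of exposure fusion for one pixel (bias = 1):
# copies at boosted exposures are weighted by how close they are to
# middle grey, then averaged. Values are linear in [0, 1]. Illustration
# only; not darktable's implementation.
def well_exposedness(v, target=0.5, sigma=0.2):
    # Pixels near middle grey receive the highest weight (assumed form).
    return math.exp(-((v - target) ** 2) / (2 * sigma ** 2))

def fuse(pixel, ev_shift=1.0, copies=2):
    # Original plus `copies` exposure-boosted versions of the pixel.
    exposures = [min(1.0, pixel * 2 ** (i * ev_shift)) for i in range(copies + 1)]
    weights = [well_exposedness(v) for v in exposures]
    return sum(v * w for v, w in zip(exposures, weights)) / sum(weights)

# A deep shadow pixel is lifted towards mid-grey by the brighter copies:
shadow = fuse(0.05, ev_shift=1.0, copies=2)
```

This shows why fusion compresses dynamic range: dark pixels are dominated by their well-exposed brighter copies, while already bright pixels are not.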
A 3D LUT is a three-dimensional table which allows any RGB value to be transformed into another RGB value; it is normally used for film simulation and color grading. The module accepts .cube and .png (haldclut) files. The 3D LUT data are saved neither in the database nor in the XMP file; only the path of the 3D LUT file inside the 3D LUT folder is saved (see below).
The lut3d module needs to find the 3D LUT file at the same place in your 3D LUT folder to calculate the output image. This means you have to back up your 3D LUT folder properly. Sharing an image with its XMP is useless if the recipient doesn't have the same 3D LUT file in their 3D LUT folder.
File selection is inactive as long as the 3D LUT folder (where you have stored your LUT files) is not defined in 3D LUT root folder under preferences/core options/miscellaneous.
A 3D LUT is relative to a specific color space, and you have to select the one for which it has been built. Cube files are usually related to REC.709, while most others are related to sRGB.
The interpolation method defines how to calculate output colors when input colors do not lie exactly on a node of the RGB cube described by the 3D LUT. Three interpolation methods are available: tetrahedral (the default), trilinear and pyramid. Usually you won't see any difference between the interpolation methods except with small LUTs.
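Trilinear interpolation, for instance, blends the eight LUT nodes surrounding the input color. The following is a small self-contained sketch of that idea, not darktable's code:

```python
# Illustrative trilinear lookup in a tiny 3D LUT (not darktable's code).
# The LUT is a cube of n nodes per axis mapping input RGB -> output RGB;
# colors between nodes are blended from the 8 surrounding nodes.
def trilinear_lookup(lut, n, r, g, b):
    def axis(v):
        x = v * (n - 1)
        i = min(int(x), n - 2)   # lower node index along this axis
        return i, x - i          # index and fractional position
    (ri, rf), (gi, gf), (bi, bf) = axis(r), axis(g), axis(b)
    out = [0.0, 0.0, 0.0]
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                # Weight of this corner node = product of per-axis blends.
                w = ((rf if dr else 1 - rf) *
                     (gf if dg else 1 - gf) *
                     (bf if db else 1 - bf))
                node = lut[ri + dr][gi + dg][bi + db]
                for c in range(3):
                    out[c] += w * node[c]
    return out

# An identity LUT (each node maps to its own coordinates) must return
# any in-gamut input color unchanged:
n = 4
identity = [[[(i / (n - 1), j / (n - 1), k / (n - 1))
              for k in range(n)] for j in range(n)] for i in range(n)]
color = trilinear_lookup(identity, n, 0.3, 0.6, 0.9)
```

With small LUTs the nodes are far apart, which is when the choice of blending scheme (trilinear vs. tetrahedral vs. pyramid) becomes visible.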
This module helps remove fringes via edge detection. Where pixels are detected as fringe, their color is rebuilt from lower-saturated neighboring pixels.
Sets the operation mode for detecting fringes. “global average” is usually the fastest but might show slightly incorrect previews at high magnification. It might also protect the wrong regions of color too much or too little in comparison with local averaging. “local average” is slower because it computes local color references for every pixel, which might protect color better than global average and allows color to be rebuilt where actually required. The “static” method does not use a color reference but directly uses the threshold as given by the user.
Sets the spatial extent of the Gaussian blur used for edge detection. The algorithm uses the difference between the Gaussian-blurred and original image as an indicator for edges (a special case of “difference of Gaussians” edge detection). Try increasing this value if you want a stronger detection of fringes, or if the thickness of the fringe edges is too high.
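The edge indicator can be sketched in one dimension: blur the signal with a Gaussian, then use the difference between the original and the blurred version, which peaks at edges. This is an illustration of the idea only, not the module's code.

```python
import math

# 1D Gaussian blur with clamped borders (illustration only).
def gaussian_blur_1d(signal, sigma):
    radius = int(3 * sigma)
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    s = sum(kernel)
    kernel = [k / s for k in kernel]  # normalize
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(signal) - 1)
            acc += k * signal[idx]
        out.append(acc)
    return out

# A step edge: |original - blurred| peaks right at the transition.
signal = [0.0] * 10 + [1.0] * 10
blurred = gaussian_blur_1d(signal, sigma=2.0)
edge = [abs(a - b) for a, b in zip(signal, blurred)]
```

A larger sigma (the radius slider) widens the blur, so the indicator responds to thicker fringes.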
Sets the threshold over which the edge strength of a pixel counts as “fringe”. The colors of the affected pixels will be rebuilt from neighboring pixels. Try lowering this value if not enough fringe is detected, and increasing it if too many pixels are desaturated. You may additionally want to play around with the edge detection radius.
This module implements a generic color look-up table in Lab space. The input is a list of source and target points; the complete mapping is interpolated using splines. The resulting luts are editable by hand and can be created using the darktable-chart utility to match a given input (such as hald-cluts and RAW/JPEG pairs with in-camera processing). See Section 10.3, “Using darktable-chart” for details.
When you select the module in darkroom mode, it should look something like the image above (configurations with more than 24 patches are shown in a 7x7 grid instead). By default, it will load the 24 patches of a color checker classic and initialise the mapping to identity (no change to the image).
The grid shows a list of colored patches. The colors of the patches are the source points. The target color of the selected patch is shown as offsets controlled by sliders in the GUI under the grid of patches. An outline is drawn around patches that have been altered, i.e. the source and target colors differ.
The selected patch is marked with a white square, and its number is displayed in the combo box below. Select a patch by left clicking on it, or using the combo box, or using the color picker.
To modify the color mapping, you can change source as well as target colors.
The main use case is to change the target colors. You start with an appropriate palette of source colors (either from the presets menu or from a style you download). You can then change lightness (L), green-red (a), blue-yellow (b), or saturation (C) of the patches' target values via sliders.
To change the source color of a patch you select a new color from your image by using the color picker, and Shift+click on the patch you want to replace. You can switch between point and area sampling mode from within the global color picker panel (see Section 3.3.6, “Global color picker”).
To reset a patch, double-click it. Right-click a patch to delete it. Shift+click on empty space to add a new patch (with the currently picked color as source color).
This module reduces noise in your image while preserving structures. This is accomplished by averaging each pixel with some surrounding pixels in the image. The weight of such a pixel in the averaging process depends on the similarity of its neighborhood to the neighborhood of the pixel being denoised. A patch of a certain size is used to measure that similarity. As denoising is a resource-hungry process, it slows down pixelpipe processing significantly; consider activating this module late in your workflow.
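The patch-similarity weighting at the heart of non-local means can be illustrated with a small sketch (a pedagogical example, not darktable's code; the image and parameters are made up):

```python
import math

def patch(img, x, y, size):
    """Extract a flat list of pixel values from a square patch (edge-clamped)."""
    half = size // 2
    h, w = len(img), len(img[0])
    return [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
            for dy in range(-half, half + 1) for dx in range(-half, half + 1)]

def nlm_weight(img, x0, y0, x1, y1, patch_size=3, h=0.2):
    """Similarity weight between two pixels: high when their patches look alike."""
    p0 = patch(img, x0, y0, patch_size)
    p1 = patch(img, x1, y1, patch_size)
    dist2 = sum((a - b) ** 2 for a, b in zip(p0, p1)) / len(p0)
    return math.exp(-dist2 / (h * h))

# Two pixels in the same flat region get a high weight; a pixel whose
# patch lies across an edge gets a tiny one, so it barely contributes
# to the average -- that is how structure is preserved.
img = [[0.1] * 4 + [0.9] * 4 for _ in range(8)]
w_same = nlm_weight(img, 1, 4, 2, 4)   # both in the dark region
w_edge = nlm_weight(img, 1, 4, 6, 4)   # dark vs. bright region
```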
In case your film is scanned with a DSLR, you will need to correct the color deviations of the DSLR first, in the workflow and in the pipeline, to ensure maximal reliability of the film color corrections. That is:
white-balancing against the light source used to scan,
applying the standard matrix in the input color profile,
ensuring exposure is set to use the maximum range of the histogram without clipping.
You are advised to take a profiling picture of the light source alone, with no film mounted, to sample the white balance. That sampled value can then be copy-pasted to all your images processed with the same setting.
The film should be scanned such that the frame is visible, so you have a visible sample of the unexposed film base that you can use to set the Dmin. If your film holder completely hides this border, you are advised to take a profiling picture of the film shifted in the holder, sample the Dmin once for the film strip on that profiling picture, and copy-paste the settings to all other edits.
The modules “filmic”, “filmic RGB” and “base curve” should be disabled when working with scanned film.
The working color profile for darktable's pipeline should be set to linear Rec2020 or to a profile that actually represents your film emulsion's color space. Such ICC profiles are provided here :
It is strongly recommended that you set the parameters following the order that the GUI presents (from top to bottom, tabs from left to right) since the next settings depend on the previous.
The first option is the Film Stock drop-down menu. Here you choose whether the negative you are working with is color or black and white. Selecting the black and white option simply removes the sliders in the module which pertain to color information. This tidies up the interface by removing controls you don't need.
Next are 3 tabs: “Film Properties”, “Corrections”, and “Print Properties”.
The “Color of the film base” section allows you to sample an area from your scan which contains only the base film stock. This is the area just outside of the image (i.e. an unexposed part of the film). When working with black and white negatives, you can leave this at its default value of white. If working on color film, click the eye dropper to the right of the color bar. This will create a bounding box covering about 98% of your image. Then, left-click and drag across an area of your negative which contains only unexposed film stock. This will automatically calculate values for the “D min red component”, “D min green component” and “D min blue component” sliders. At this point, it is likely that your image will still look too dark.
Next we move to the D Max slider in the “Dynamic range of the film” section. This slider effectively sets our white point. Dragging this to the left will make the neg brighter. Dragging to the right will make the neg darker. If adjusting manually, it's a good idea to watch your histogram and ensure that you don't push the highlights into clipping (where the histogram is pushed off the right hand side of the graph). Again, you can use the eye dropper (on the right) to allow Negadoctor to automatically calculate this value to ensure maximal use of the histogram without clipping. If using the eye dropper, left-click and drag to draw a rectangle across only the exposed parts of the neg. Don't include the unexposed film stock in the rectangle, as this will skew the result.
Then, the “Scan exposure bias” slider under “Scanner exposure settings” allows us to set a black point. Dragging this to the left will make the neg brighter. Dragging to the right will make the neg darker. If adjusting manually, it's a good idea to watch your histogram and ensure that you don't push the shadows into clipping (where the histogram is pushed off the left hand side of the graph). Again, you can use the eye dropper to allow Negadoctor to automatically calculate any needed offset.
By now, your neg should be looking pretty close to what you were expecting to see. However, for color negs, you might need an extra step.
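The density math behind these film-tab steps can be illustrated with a simplified sketch (not darktable's exact formula; the variable names, the offset handling and the clamping are assumptions for illustration only):

```python
import math

def invert_negative(pixel, d_min, d_max, exposure_bias=0.0):
    """Toy density-space inversion of one channel of a scanned negative.

    pixel:  linear transmittance from the scan (0..1]
    d_min:  transmittance of the unexposed film base for this channel
    d_max:  dynamic range of the film, in density units
    """
    # Density of the scanned sample relative to the film base.
    density = math.log10(d_min / max(pixel, 1e-6))
    # Normalize by the film's dynamic range and apply a black offset.
    positive = density / d_max + exposure_bias
    return min(max(positive, 0.0), 1.0)

# The film base itself (pixel == d_min) maps to black, while a dense,
# heavily exposed area of the negative maps toward white.
base = invert_negative(0.8, d_min=0.8, d_max=2.0)
dense = invert_negative(0.02, d_min=0.8, d_max=2.0)
```

This is why sampling Dmin first matters: every later control operates on densities measured relative to the film base.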
Moving to the “Corrections” tab, we have sliders which allow for corrections within both the shadow regions and the highlight regions. Again, there are eye droppers to allow for automatic definition of shadow color casts and/or highlight color casts. For shadows, select the eye dropper, and left-click and drag a large rectangle across the majority of your image. Negadoctor will calculate appropriate values, and these will be displayed under “shadows red offset”, “shadows green offset”, and “shadows blue offset”.
These settings should not be needed for most well-preserved negatives; they are mostly useful for old, poorly-preserved negatives whose decayed film base induces undesirable color casts. Be aware that the shadows color cast setting will have no effect if the “scan exposure bias” setting, in the “film” tab, is set to a non-zero value.
It's important to understand that the highlights white balance should always be calculated after the shadows color cast, as the values of the shadows color cast sliders play a role in the values chosen for the highlights white balance (if using the eye dropper to have Negadoctor calculate those values). For highlight color casts, select the eye dropper and click and drag a rectangle across the brightest area of your image.
This setting should not be needed if your film has been exposed with a light source close to the one for which it was balanced, for example if you shot a scene lit by daylight on a daylight-balanced film.
On the “Print Properties” tab, we have sliders for “paper black”, “paper grade”, “paper gloss”, and “print exposure” adjustment.
These settings mimic the tonal effect of the photochemical paper that would finally create the real image in the analog process.
For the “paper black”, select the eye dropper, and click and drag across only the exposed part of the negative. If you can see unexposed film stock around the edges of your image, ensure that these areas are excluded from the drawn rectangle for calculating the Paper black value.
Paper black represents the density of the blackest silver-halide crystal available on the virtual paper. Since that black density always yields non-zero luminance in the analog process, but the digital pipeline usually expects black to be encoded as zero RGB, this setting lets you remap paper black to pipeline black with an offset.
“Paper grade” is your gamma (contrast) control, and defaults to a value of 4. If all has gone well, this value (4) minus the value of D max (from the “Film Properties” tab) should leave you with a value between 2 and 3.
“Paper gloss” is essentially a highlight compression tool. As you drag this slider to the left, you will see in the histogram that the highlight values are being compressed (pushed to the left). Adjust this accordingly so that your highlights are not clipped in the histogram, or to simulate a matte print with low-contrast highlights.
The “Print exposure adjustment” slider is there to correct any last-minute clipping of highlights, although if you have followed all prior instructions you shouldn't need it. It is also possible to increase print exposure while decreasing paper gloss to brighten midtones without losing highlights.
In this module you define the input color profile, i.e. how colors of your input image are to be interpreted. You also have an option to have colors confined to a certain gamut in order to mitigate some (infrequent) color artifacts.
Choose the profile or color matrix to apply. darktable offers many widespread matrices along with an enhanced matrix for some camera models. The enhanced matrices were processed by the darktable team to provide a look closer to the manufacturer's.
You can also supply your own input ICC profiles and put them into $DARKTABLE/share/darktable/color/in or $HOME/.config/darktable/color/in. $DARKTABLE is used here to represent darktable's installation directory and $HOME your home directory. One common source of ICC profiles is the software that is shipped with your camera; it often contains profiles specific to your camera model. You may need to activate the unbreak input profile module (see Section 3.4.1.17, “Unbreak input profile”) to use your extra profiles.
If your input image is a low dynamic range file like JPEG, or a raw in DNG format, it might already contain an embedded ICC profile which darktable will use as a default. You can always override darktable's choice and select a different profile. Select “embedded icc profile” to restore the default.
This combobox lets you activate a color clipping mechanism. In most cases you can leave it at its default “off” state. However, if your image shows some specific features like highly saturated blue light sources, gamut clipping might be useful to avoid black pixel artifacts. See Section 3.2.6.6, “Possible color artifacts” for more background information.
You can select from a list of RGB profiles. Input colors with a saturation that exceeds the permissible range of the selected profile get clipped to a maximum value. “linear Rec2020 RGB” and “Adobe RGB (compatible)” allow for a broader range of unclipped colors, while “sRGB” and “linear Rec709 RGB” produce a tighter clipping. You should select the profile that prevents artifacts while still maintaining highest color dynamics.
This module is used to tweak the exposure. It is directly linked to the histogram panel. If you correct exposure graphically, using the histogram (see Section 3.3.8, “Histogram”), you automatically activate the exposure module. The histogram simply acts as a view for the exposure module.
You can activate multiple instances of this module each with different parameters acting on different parts of the image which you select by a drawn mask (see Section 3.2.4, “Multiple instances” and Section 3.2.5.5, “Drawn mask”). The histogram is always linked to the last instance in pixelpipe.
This module is responsible for one of the most basic steps in each raw image development. An exposure adjustment value allows you – within certain limits – to correct for under- or overexposure. A shift by 1EV is equivalent to a change of exposure time by a factor of 2.
Positive exposure corrections will make the image brighter. As a side effect, the noise level gets higher. Depending on the basic noise level of your camera and the ISO value of your image, positive exposure compensations of up to 1EV or 2EV still give reasonable results.
Negative exposure corrections will make the image darker. Given the nature of digital images this cannot correct for fully blown-out highlights, but it allows you to reconstruct data in case only some of the RGB channels are clipped (see also Section 3.4.1.27, “Highlight reconstruction”).
A black level adjustment is a basic tool to increase contrast and pop of an image. The value defines the threshold at which dark gray values are cut off to pure black. Use with care, as the clipped values cannot be recovered in other modules further down the pixelpipe. Please also have a look at the tone curve module (see Section 3.4.2.13, “Tone curve”) and the levels module (see Section 3.4.2.12, “Levels”), which can produce similar results with fewer side effects as they come later in the pixelpipe.
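The combined effect of the exposure and black level controls can be sketched as a simple linear transform in scene-referred space (a simplified illustration, not darktable's actual code):

```python
def apply_exposure(pixel, ev, black):
    """Toy scene-linear exposure: subtract the black level, then scale by 2^EV."""
    return max(pixel - black, 0.0) * (2.0 ** ev)

# +1 EV doubles every value above the black level; values at or below
# the black level are clipped to pure black and cannot be recovered
# further down the pipe.
mid = apply_exposure(0.18, ev=1.0, black=0.0)
crushed = apply_exposure(0.01, ev=0.0, black=0.02)
```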
The exposure module has two modes of operation.
In “manual” mode you directly define the value for exposure correction that you want to apply to your image.
In “automatic” mode darktable analyses the histogram of your image. You select a reference point within the histogram as a percentile and define a target level – darktable automatically calculates the exposure compensation that is needed to shift the selected position to that target value. The computed exposure compensation value is displayed in the module's GUI for your information.
The “automatic” mode has a black level adjustment that works as in manual mode.
Automatic adjustment is only available for raw images. A typical use case is deflickering of time-lapse photographs. You apply an automatic exposure correction with the same set of parameters to all images of the series – differences in lighting get compensated so that the final video sequence does not show any flickering.
Automatically remove the camera exposure bias (only available in “manual” mode). The camera exposure bias is the EV compensation set on the camera's light meter, commonly used to prevent highlight clipping by photographers who expose to the right of the histogram. This feature relies on reading the EXIF field ExposureBiasValue, which must be correctly filled in the RAW file by the camera for it to work.
Note: for Fuji RAWs, it is necessary to add an additional +0.75 EV, for an overall correction of +1.25 EV, to compensate for their native underexposure.
darktable can calculate correct black level and exposure values for your image based on the content of a rectangular area. The adjustment slider lets you define what percentage of bright values are to be clipped out of the calculation. Pressing the icon starts the calculation and lets you draw a rectangular area of your choice using your mouse. This feature is only available in “manual” mode.
Defines a location in the histogram for automatic exposure correction. A percentile of 50% denotes a position in the histogram where 50% of pixel values lie below and 50% above. For more details see percentile. Only available in “automatic” mode.
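The automatic correction described above can be sketched as follows (an assumed reading of the mechanism for illustration, not darktable's actual implementation):

```python
import math

def auto_exposure_ev(pixels, percentile, target):
    """EV needed to move the chosen histogram percentile onto the target level."""
    ordered = sorted(pixels)
    idx = min(int(len(ordered) * percentile / 100.0), len(ordered) - 1)
    reference = max(ordered[idx], 1e-6)   # avoid log of zero
    return math.log2(target / reference)

# An underexposed frame: the 50th percentile sits at 0.09, and we want
# it at 0.18 (middle gray), so one full stop of compensation is needed.
pixels = [0.02, 0.05, 0.07, 0.08, 0.09, 0.20, 0.30, 0.45]
ev = auto_exposure_ev(pixels, percentile=50, target=0.18)
```

Applying the same percentile/target pair across a time-lapse series is what makes deflickering work: each frame gets whatever EV shift puts its chosen percentile on the common target.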
This module is used to crop, rotate and correct perspective distortions of your image. You can overlay your image with various helpful guidelines that assist you in using the tools.
Some of the tools of this module, namely adjustment of angle and corrections of perspective distortion, require the original image data to be interpolated. For best sharpness, set “lanczos3” as the pixel interpolator in the processing tab of the preferences dialog (see Section 8.6, “Processing”).
Whenever the user interface of this module is in focus, you will see the full uncropped image overlaid with handles and guiding lines.
First off, select the aspect ratio you want and size the crop boundaries by dragging the border and corner handles. Use the button to the right of the aspect box to swap between portrait and landscape mode. You can move the crop rectangle around by holding down the left mouse button and dragging. When you are done and want to execute the crop, just give focus to another module or double-click into the image base. You can change your crop area at any time by revisiting this module.
This tool corrects the rotation angle, helping you level an image. You can either set a numerical value or use your mouse directly on the image. To use your mouse, right-click, hold the button down and draw a line along a suitable horizontal or vertical feature; as soon as you release the mouse button, the image is rotated so that the line you drew matches the horizontal/vertical axis.
This tool is used to correct perspective distortions in your image. Useful, for example, when you shoot a high building from the ground with a short focal length, aiming upwards with your camera. The combobox lets you select the type of correction you want to use:
vertical | limit the correction to vertical lines
horizontal | limit the correction to horizontal lines
full | correct both horizontal and vertical lines
Depending on the selected correction type you will see two or four straight adjustment lines overlaid on your image. Two red circles on every line let you modify the line positions with your mouse. Each line additionally carries a “symmetry” button. If activated (and highlighted in red) all movements of the affected line will be mirrored by the opposite line. In order to correct perspective distortions, you need to find suitable horizontal and/or vertical features in your image and align the adjustment lines with them. When done, press the “OK” button, which is located close to the center of your image. The image will be corrected immediately. You can come back at any time and refine your corrections by selecting “correction applied” in the keystone combobox.
Use this option to avoid black edges at the image borders. Useful when you rotate the image.
Here you can change what aspect ratio you want to have on the result, thus constraining the proportion of width and height of the crop rectangle to the aspect ratio of your choice. Many common numerical ratios are pre-defined. A few special aspect ratios deserve explanation:
freehand | form the rectangle freely without any ratio restrictions
original image | constrains the ratio to equal the original image's ratio
square | constrains the ratio to 1
golden cut | constrains the ratio to equal the golden number
You can also select any other ratio after opening the combobox and typing it in the form “x:y”. If you want a certain aspect ratio to be added to the pre-defined ones, you can do so by including a line of the form plugins/darkroom/clipping/extra_aspect_ratios/foo=x:y in darktable's configuration file $HOME/.config/darktable/darktablerc. Here “foo” defines the name of the new aspect ratio and “x” and “y” the corresponding numerical values.
In case the chosen guidelines are not symmetrical relative to the image frame, you can flip them on the horizontal, vertical or both axes.
In the margins tab, you can directly set the distance between the border of the uncropped image and the crop rectangle for the four margins. The values are specified in percent of the width/height of the uncropped image. These values will be updated if you move or resize the crop rectangle with the mouse.
This module is designed to automatically correct for converging lines, a form of perspective distortions frequently seen in architectural photographs. The underlying mechanism is inspired by Markus Hebel's ShiftN program.
Perspective distortions are a natural effect when projecting a three dimensional scene onto a two dimensional plane, and cause objects close to the viewer to appear larger than objects further away. Converging lines are a special case of perspective distortions frequently seen in architectural photographs. Parallel lines, when photographed at an angle, are transformed into converging lines that meet at some vanishing point within or outside the image frame.
This module is able to correct converging lines by warping the image in such a way that the lines in question become parallel to the image frame. Corrections can be applied in vertical and horizontal direction, either separately or in combination. In order to perform an automatic correction the module analyzes the image for suitable structural features consisting of line segments. Based on these line segments a fitting procedure is started that determines the best values of the module parameters.
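Such a warp can be modeled as a projective transform (homography). A minimal sketch of applying one to a point follows; the matrix values are made up for illustration, and darktable derives its own transform from the fitted parameters:

```python
def apply_homography(h, x, y):
    """Map a point through a 3x3 homography in homogeneous coordinates."""
    xh = h[0][0] * x + h[0][1] * y + h[0][2]
    yh = h[1][0] * x + h[1][1] * y + h[1][2]
    wh = h[2][0] * x + h[2][1] * y + h[2][2]
    return xh / wh, yh / wh

# A pure vertical "keystone" term: the bottom of the frame is left
# alone while the top is stretched outward, which straightens lines
# that converge toward the top of the image.
keystone = [[1.0, 0.0, 0.0],
            [0.0, 1.0, 0.0],
            [0.0, -0.2, 1.0]]
bottom = apply_homography(keystone, 0.5, 0.0)
top = apply_homography(keystone, 0.5, 1.0)
```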
Clicking the “get structure” icon causes darktable to analyze the image for structural elements. Line segments are detected and evaluated. Only lines that form a set of either vertical or horizontal lines are used for further processing steps. The line segments are displayed as overlays on the image base. A color code describes what type of line darktable has found:
green | lines that are selected as relevant vertical converging lines
red | lines that are vertical but not part of the set of converging lines
blue | lines that are selected as relevant horizontal converging lines
yellow | lines that are horizontal but not part of the set of converging lines
grey | other detected lines that are not of interest to this module
Lines marked in red or yellow are regarded as outliers and are not taken into account for the automatic fitting step. This outlier elimination involves a statistical process with random sampling so that each time you press the “get structure” button the color pattern of the lines will look a bit different. You can manually change the status of line segments: left-clicking on a line selects it (turns the color to green or blue) while right-clicking deselects it (turns the color to red or yellow). Keeping the mouse button pressed allows for a sweeping action to select/deselect multiple lines in a row; the size of the select/deselect brush can be changed with the mouse wheel. Holding down the Shift key and keeping the left or right mouse button pressed while dragging selects or deselects all lines in the chosen rectangular area.
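The random-sampling outlier elimination mentioned above is in the spirit of RANSAC. A toy sketch of the idea on 2-D points (a hypothetical illustration, not darktable's actual fitting code):

```python
import random

def ransac_line(points, iterations=100, tolerance=0.05, seed=42):
    """Fit y = m*x + b by repeatedly sampling two points and
    keeping the hypothesis with the most inliers."""
    rng = random.Random(seed)
    best_inliers, best_model = [], None
    for _ in range(iterations):
        (x0, y0), (x1, y1) = rng.sample(points, 2)
        if x0 == x1:
            continue  # degenerate vertical pair, skip
        m = (y1 - y0) / (x1 - x0)
        b = y0 - m * x0
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < tolerance]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (m, b)
    return best_model, best_inliers

# Points on y = 2x plus two gross outliers; the outliers are
# rejected instead of dragging the fit off the true line.
pts = [(x / 10, 2 * x / 10) for x in range(10)] + [(0.2, 3.0), (0.7, -1.0)]
model, inliers = ransac_line(pts)
```

The random sampling also explains why repeated presses of “get structure” can classify borderline lines differently each time.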
Clicking one of the “automatic fit” icons (see below) starts an optimization process which finds the best suited parameters. The image and the overlaid lines are then displayed with perspective corrections applied.
This parameter controls a rotation of the image around its center and can correct for a skewed horizon.
This parameter corrects converging lines vertically. In some cases you get a more natural-looking image if you correct vertical distortions not to their full extent but rather to an 80 to 90% level. If desired, just reduce the value after having performed the automatic correction.
This parameter shears the image along one of its diagonals and is needed when correcting vertical and horizontal perspective distortions simultaneously.
If activated, a number of guide lines are laid over the image to help you judge the quality of the correction.
When activated, the automatic cropping feature clips the image to get rid of any black corners. At your choice you can either clip to the “largest area” or to the largest rectangle maintaining the original aspect ratio (“original format”). In the latter case you can manually adjust the automatic cropping result: left-click into the clip region and move it around. The size of the region is modified automatically to exclude any black corners.
This parameter controls how lens and camera specifics are taken into account. If set to “generic” a focal length of 28mm on a full-format camera is assumed. If set to “specific”, focal length and crop factor can be set manually.
The focal length of the lens used. The default value is taken from the Exif data of your image. This parameter is only effective and visible if the “specific” lens model has been selected.
The crop factor of the camera used. You will typically need to set this value manually. This parameter is only effective and visible if the “specific” lens model has been selected.
If the “specific” lens model has been selected this parameter allows for a free manual adjustment of the image's aspect ratio.
Clicking on one of the icons starts an automatic fitting of the module parameters based on the selected vertical and/or horizontal lines. You can choose to correct only vertical distortions, only horizontal distortions, or both types of distortions simultaneously. Ctrl+clicking on either icon only fits rotation. Shift+clicking on either icon only fits vertical and/or horizontal lens shift.
Clicking on the “get structure” icon causes the image to be (re-)analyzed for suitable line segments. Shift+clicking applies a prior contrast enhancement step, Ctrl+clicking applies an edge enhancement step. Both variations can be used alone or in combination if the default is not able to detect a sufficient number of lines. A second icon discards all collected structural data, and a third icon switches the overlay display of line segments on and off.
The haze removal module is designed to automatically reduce the effect of dust and haze in the air, which often reduce the color contrast in landscape photographs. More generally, this module can be used to give pictures a color boost, specifically in low-contrast image regions.
The higher the haze density in the air and the longer the distance between the camera and the photographed object, the less colorful the object appears in the image. Haze absorbs light traveling from the objects toward the camera, but it is also a source of diffuse background light. The haze removal module therefore first estimates, for each image region, the amount of haze in the scene, and then removes the diffuse background light according to its local strength and recovers the original object light.
The haze removal module has two controls that determine the amount of haze reduction and limit the distance up to which haze is removed. Setting both controls to unity maximizes the amount of haze removal but this is also likely to produce some artifacts. Removing the atmospheric light entirely may render the image flat and may result in an unnatural looking style. Optimal values are typically below unity and these are rather image dependent but also a matter of personal aesthetic preferences.
The strength parameter controls the amount of haze removal. Setting it to unity, the module removes 100 percent of the detected haze between the camera and up to the specified distance; see below. Negative values for the strength increase the amount of haze in the image.
This parameter limits the distance up to which haze is removed. For small values, haze removal is restricted to the foreground of the image. Haze is removed from the foreground to the far background if the distance parameter is set to unity. In case of a negative strength the distance control has no effect.
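The module's behavior follows the standard atmospheric scattering model. A minimal sketch of recovering object light from it (an illustration of the model, not darktable's implementation; the transmission value here is a made-up stand-in for the module's per-region estimate):

```python
def dehaze_pixel(observed, airlight, transmission, strength=1.0, t_min=0.1):
    """Invert the haze model I = J*t + A*(1-t) for one channel.

    observed:     hazy pixel value I
    airlight:     diffuse background light A
    transmission: fraction t of object light that reached the camera
    strength:     0 = no change, 1 = remove all detected haze
    """
    # Blend toward full removal according to the strength control;
    # t_min caps the correction, like the distance control.
    t = 1.0 - strength * (1.0 - max(transmission, t_min))
    return (observed - airlight) / t + airlight

# A distant, washed-out pixel regains contrast against the airlight;
# with strength 0 the pixel is returned unchanged.
hazy = 0.75
clear = dehaze_pixel(hazy, airlight=0.8, transmission=0.4)
untouched = dehaze_pixel(hazy, airlight=0.8, transmission=0.4, strength=0.0)
```

The division by a small `t` is also why pushing both controls to unity amplifies artifacts in the far background.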
This module adds a correction curve to image data, which is required if you have selected certain input profiles in the input color profile module.
If you decide in module input color profile to use an ICC profile from the camera manufacturer, a correction curve very frequently needs to be pre-applied to image data – or else the final output looks much too dark. This extra processing is not required if you use darktable's standard or enhanced color matrices. The correction curve is defined with a linear part extending from the shadows to some upper limit and a gamma curve covering mid-tones and highlights. For further reading please also have a look at darktable's neighbouring project UFRaw.
Set the upper limit for the region counted as shadows and where no gamma correction is performed. Typically values between 0.0 and 0.1 are required by the profile.
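The shape of such a curve can be sketched as a linear toe joined continuously to a power (gamma) segment. This is a simplified illustration; the join construction and parameter values are assumptions, not darktable's exact curve:

```python
def correction_curve(x, linear_limit=0.1, gamma=2.2):
    """Linear below the limit, power curve above, joined continuously."""
    if x < linear_limit:
        # The slope is chosen so the two segments meet at the limit.
        return x * (linear_limit ** (1.0 / gamma)) / linear_limit
    return x ** (1.0 / gamma)

# The curve lifts mid-tones strongly, which is exactly the correction
# a too-dark, linearly decoded image needs.
toe = correction_curve(0.05)
mid = correction_curve(0.5)
```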
This module is able to correct certain lens flaws, namely distortions, transversal chromatic aberrations (TCA) and vignetting. It relies on the external library lensfun, which comes with correction profiles for many (but not all) common cameras and lenses.
In order to perform lens corrections the module uses Exif data of your image to identify the specific camera/lens combination and collects the needed correction parameters from a profile in lensfun's database.
The camera make and model as determined by Exif data. You can override this manually and select your camera from a hierarchical menu.
Only lenses with correction profiles matching the selected camera will be shown.
The lens make and model as determined by Exif data. You can override this manually and select your lens from a hierarchical menu. This is mainly needed for pure mechanical lenses, but may also be needed for off-brand / third party lenses.
Corrections additionally depend on certain photometric parameters that are read from Exif data: focal length (needed for distortion, TCA, vignetting), aperture (needed for TCA, vignetting) and focal distance (needed for vignetting). Many cameras do not record focal distance in their Exif data; most likely you need to set this manually.
You can manually override all automatically selected parameters. Either take one of the predefined values from the pull-down menu; or – with the pull-down menu still open – just type in your own value.
If your system's lensfun library has no correction profile for the automatically identified camera/lens combination the controls for the three photometric parameters are not displayed, and you get a warning message instead. You may try to find the right profile yourself by searching for it in the menu. If you can't find your lens, check if it is in the list of currently supported lenses, and refer to the lensfun-update-data tool. If there is no matching profile for your lens, please visit this lens calibration service offered by Torsten Bronger, one of darktable's users. Alternatively you may go to lensfun's home page and learn how to generate your own set of correction parameters. Don't forget to share your profile with the lensfun team!
This combobox gives you a choice about which corrections (out of distortion, TCA and vignetting) darktable shall apply. Change this from its default “all”, if your camera has already done some internal corrections (e.g. of vignetting), or if you plan to do certain corrections with a separate program.
In addition to the correction of lens flaws, this module can change the projection type of your image. Set this combobox to the aimed projection type, like “rectilinear”, “fish-eye”, “panoramic”, “equirectangular”, “orthographic”, “stereographic”, “equisolid angle”, “thoby fish-eye”.
This slider allows you to adjust the scaling factor of your image. Pressing the auto scale button (to the right of the slider) will let darktable find the best fit to avoid black corners.
The default behavior of this module is to correct lens flaws. Switch this combobox to “distort” in order to simulate the behavior of a specific lens (inverted effect).
This slider allows you to override the correction parameter for TCA on the red channel. You can also use it to manually set the parameter in case the lens profile does not contain TCA correction. Look out for colored seams at high-contrast edges and adjust this parameter and the following one to minimize those seams.
This slider allows you to override the correction parameter for TCA on the blue channel. You can also use it to manually set the parameter in case the lens profile does not contain TCA correction.
This module reduces noise in your image while preserving sharp edges. This is accomplished by averaging pixels with their neighbors, taking into account not only the geometric distance but also the distance on the range scale, i.e. differences in the RGB values. As denoising is a resource-hungry process, it slows down pixelpipe processing significantly; consider activating this module late in your workflow. The module can be really effective if one RGB channel is noisier than the other two. In such a case, use the channel mixer module to inspect the channels one by one, in order to set the blur intensities accordingly.
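The combination of geometric and range distances described above is the bilateral weighting scheme, which can be sketched for a single neighbor (a pedagogical example with made-up sigma values, not darktable's code):

```python
import math

def bilateral_weight(dx, dy, value_diff, sigma_s=2.0, sigma_r=0.1):
    """Weight of a neighbor: near in space AND similar in value => high."""
    spatial = (dx * dx + dy * dy) / (2.0 * sigma_s * sigma_s)
    range_ = (value_diff * value_diff) / (2.0 * sigma_r * sigma_r)
    return math.exp(-(spatial + range_))

# A nearby pixel with a similar value contributes strongly; a pixel
# across an edge (large value difference) is nearly ignored, which
# is what preserves the edge while flat areas get smoothed.
w_flat = bilateral_weight(1, 0, value_diff=0.01)
w_edge = bilateral_weight(1, 0, value_diff=0.5)
```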
The darktable team, with the help of many users, has measured noise profiles for various cameras. Differentiated by ISO settings we evaluated how the noise statistics develop with brightness for the three color channels. Our set of profiles covers well above 200 popular camera models from all major manufacturers.
darktable stores noise profiles in an external json file. This file can be found in $DARKTABLE/share/darktable/noiseprofile.json, where $DARKTABLE represents the darktable installation directory. The json format is quite straightforward and explained in depth at json.org. You can replace the default noise profiles with your own and specify that file on the command line when starting darktable. For more details see Section 1.1.1, “darktable binary”. If you generate your own noise profiles don't forget to share your results with the darktable team!
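As a sketch of how such a file can be inspected, the following Python snippet parses a small excerpt written in the style of noiseprofile.json and lists the profiled ISO values per model. Note that the field names and structure used here are illustrative assumptions, not the file's guaranteed schema; check the shipped file for the authoritative layout.

```python
import json

# Hypothetical minimal excerpt in the style of noiseprofile.json;
# the real file's exact schema may differ.
sample = """
{
  "noiseprofiles": [
    {
      "maker": "ExampleCam",
      "models": [
        {
          "model": "X100",
          "profiles": [
            { "name": "X100 iso 400", "iso": 400 }
          ]
        }
      ]
    }
  ]
}
"""

data = json.loads(sample)
for maker in data["noiseprofiles"]:
    for model in maker["models"]:
        isos = [p["iso"] for p in model["profiles"]]
        print(maker["maker"], model["model"], isos)
```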
Warning: the darkroom zoomed-out preview is not completely accurate. Always check your result at 100% zoom level!
Note that (almost) all sliders of this module can take values higher than their visible bounds: right-click on a slider and enter the value using the keyboard.
Based on Exif data of your raw file, darktable will automatically determine the camera model and ISO setting. If found in its database, the corresponding noise profile will be used. If your image has an intermediate ISO value, the statistical properties will be interpolated between the two closest datasets in the database, and this interpolated setting will show up as the first line in the combo box. You also have the option to manually overwrite this selection to suit your personal preferences better. The top-most entry in the combo box brings you back to the profile darktable deems most suited.
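The interpolation between the two closest ISO datasets can be pictured as a simple linear blend. This is an illustrative sketch, not darktable's actual code; the profile fields `a` and `b` are hypothetical stand-ins for the stored noise statistics:

```python
def interpolate_profile(iso, lo, hi):
    """Blend two neighboring noise profiles linearly in ISO.
    `lo` and `hi` are dicts like {"iso": 200, "a": ..., "b": ...};
    the keys "a" and "b" are placeholders for the profile's statistics."""
    t = (iso - lo["iso"]) / (hi["iso"] - lo["iso"])
    return {key: (1 - t) * lo[key] + t * hi[key] for key in ("a", "b")}

# An intermediate ISO 400 between profiled datasets at ISO 200 and ISO 800:
mixed = interpolate_profile(400,
                            {"iso": 200, "a": 1.0, "b": 0.1},
                            {"iso": 800, "a": 3.0, "b": 0.3})
```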
This module can eliminate noise using two different core algorithms. Both “non-local means” and “wavelet” can efficiently tackle luma (lightness) noise and chroma (color) noise. “wavelet” mode also lets you adjust the strength of the denoising depending on the noise coarseness. If needed you can apply two instances of this module (see Section 3.2.4, “Multiple instances”): one to tackle luma noise with blend mode “lightness” or “HSV lightness”, and another to tackle chroma noise with blend mode “color” or “HSV color”. An example of the use of two instances with blending modes is available within the presets of this module. For more information on blend modes have a look at Section 3.2.5.4, “Blending operators”. The module also offers an automatic mode for each algorithm. Automatic modes make the module easier to set up, as several parameters are guessed from the noise profile. All sliders of this module can take values higher than their bounds if needed, by right-clicking and entering a value.
As white-balance amplifies the RGB channels differently, they exhibit different noise levels. This checkbox makes the algorithm adaptive to white balance. This option should be disabled on the second instance if you use a first instance with a color blend mode.
This slider is only available if mode “non-local means” is selected. It controls the size of the patches being matched when deciding which pixels to average (see also Section 3.4.1.9, “Denoise – non local means”). Set this to higher values as the noise gets higher. Beware that high values may smooth out small edges though. Processing time will stay about the same.
This slider is only available if mode “non-local means” is selected. It controls how far from a pixel the algorithm will search for similar patches. Increasing the value can give better results for very noisy images where coarse grain noise is visible, but it is usually better to use the scattering slider instead: processing time grows with the square of the search radius, so higher values make execution much slower.
This slider is only available if mode “non-local means” is selected. Like the search radius, it controls how far from a pixel the algorithm will try to find similar patches, but does this without increasing the number of patches considered. As such, processing time will stay about the same. Increasing the value will reduce coarse grain noise, but may smooth local contrast. This slider is particularly effective to reduce chroma noise.
This slider is only available if mode “non-local means” or “non-local means auto” is selected. It controls the amount of detail that should be preserved by the algorithm. It can be used to control the amount of luma noise smoothing: a high value will result mostly in chroma noise smoothing with little smoothing of luma noise. This slider has no effect if patch size is set to 0.
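The interplay of patch size and search radius can be sketched in a few lines of Python. This is a minimal, illustrative non-local means for a single grayscale pixel, far simpler than darktable's implementation; the parameter `h` (filtering strength) is an assumption of the sketch:

```python
import math

def nl_means_pixel(img, y, x, patch=1, search=2, h=10.0):
    """Denoise one pixel of a grayscale image (list of lists) by
    averaging pixels whose surrounding patches look similar.
    Illustrative sketch only, not darktable's implementation."""
    hgt, wid = len(img), len(img[0])

    def patch_at(cy, cx):
        # Collect the (2*patch+1)^2 values around (cy, cx), clamped at borders.
        vals = []
        for dy in range(-patch, patch + 1):
            for dx in range(-patch, patch + 1):
                py = min(max(cy + dy, 0), hgt - 1)
                px = min(max(cx + dx, 0), wid - 1)
                vals.append(img[py][px])
        return vals

    ref = patch_at(y, x)
    num = den = 0.0
    # The search radius bounds how far we look for candidate patches.
    for sy in range(max(y - search, 0), min(y + search + 1, hgt)):
        for sx in range(max(x - search, 0), min(x + search + 1, wid)):
            cand = patch_at(sy, sx)
            dist2 = sum((a - b) ** 2 for a, b in zip(ref, cand)) / len(ref)
            w = math.exp(-dist2 / (h * h))  # similar patches get high weight
            num += w * img[sy][sx]
            den += w
    return num / den
```

On a perfectly flat area every patch matches, so the pixel is simply averaged with its neighbors and stays unchanged; on an edge, dissimilar patches receive near-zero weight, which is why edges survive the averaging.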
These curves are only available if mode “wavelet” is selected. The noise of an image is usually not only fine grain, but also more or less coarse grain. These curves allow you to denoise more or less depending on the coarseness of the visible noise. The left of the curve acts on very coarse grain noise, while the right of the curve acts on very fine grain noise. Pushing the curve up results in more smoothing, pulling it down in less smoothing. For example, you can preserve very fine grain noise by pulling the rightmost point of the curve down to the minimum value. As another example, if you are tackling chroma noise with a blend mode, you can push up the right part of the curve, as colors are not supposed to change much at fine grain scales: this helps especially if some isolated pixels are left undenoised.
Considering the R, G, and B curves, the best way to use them is to look at one channel at a time using the channel mixer module in gray mode, denoise that particular channel, and then do the same for the other channels. This way, your denoising can take into account the fact that some channels may be noisier than others. Be aware that guessing which channel is noisy without actually seeing the channels individually is not straightforward and can be counterintuitive: a pixel which is completely red may not be caused by noise on the R channel, but by noise on the B and G channels.
This parameter fine-tunes the strength of the denoise effect. The default value has been chosen to maximize the peak signal to noise ratio. It's mostly a matter of taste whether you prefer a rather low noise level at the cost of a higher loss of detail, or accept more remaining noise in order to better preserve finer structures within your image.
This option is available in “wavelets” and “non-local means” modes. It allows you to denoise the shadows or the highlights more aggressively. Lower the value to denoise the shadows more than the highlights. Usually, as noise increases, you will need to lower this value.
This option is available in “wavelets” and “non-local means” modes. It allows you to correct the color cast that may appear in the shadows. Increase this value if dark shadows appear too greenish, decrease it if they appear too purplish.
This option is available in the “auto” modes. In these modes, darktable tries to derive denoising parameters from the camera's noise profile. Depending on your image the automatically derived parameters may not be optimal; e.g. if your image is heavily underexposed and you lifted the exposure, you will have to increase this parameter to get proper denoising. The parameter should reflect the amplification you apply to your image: if you add 1 EV of exposure, the signal is multiplied by 2, so this parameter should be set to 2.
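The rule of thumb above can be written down directly; a one-line sketch:

```python
def exposure_multiplier(ev):
    # +1 EV doubles the signal, +2 EV quadruples it, and so on,
    # so this parameter should track 2**EV of added exposure.
    return 2.0 ** ev
```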
Demosaic is an essential step of any raw image development process.
A detailed description would be beyond the scope of this manual. In a nutshell, the sensor cells of a digital camera are only able to record different levels of lightness, not different colors. In order to get a color image, each cell is covered by a color filter, either red, green or blue. Due to the color sensitivity of human vision, there are twice as many green cells as red or blue ones. The filters are arranged in a certain mosaic, called a Bayer pattern. Each pixel of your image therefore originally carries information about only one color channel. Demosaic reconstructs the missing color channels by interpolating with data from the neighboring pixels. For further reading see the Wikipedia article on the Bayer filter.
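As a toy illustration of the interpolation idea – much simpler than the algorithms darktable actually uses – the green value at a red or blue Bayer site can be estimated by averaging its four green neighbors:

```python
def green_at(raw, y, x):
    """Bilinear estimate of the green channel at a red/blue Bayer site:
    average the directly adjacent green neighbors (in a Bayer mosaic,
    the four orthogonal neighbors of a red or blue cell are green).
    A toy sketch, not one of darktable's demosaic algorithms."""
    hgt, wid = len(raw), len(raw[0])
    vals = [raw[y + dy][x + dx]
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
            if 0 <= y + dy < hgt and 0 <= x + dx < wid]
    return sum(vals) / len(vals)
```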
As interpolation is prone to produce artifacts, various demosaic algorithms have been developed over the years. Artifacts would typically be visible as moiré-like patterns when you zoom strongly into your image. Currently darktable supports PPG, AMAZE, and VNG4. All these algorithms produce high quality output with a low tendency to artifacts. AMAZE is reported to sometimes give slightly better results; however, as AMAZE is significantly slower, darktable uses PPG as the default. VNG4 produces the softest results of the three algorithms, but if you see “maze” artifacts, try VNG4 to eliminate them.
There are a few cameras whose sensors do not use a Bayer filter. Cameras with an "X-Trans" sensor have their own set of demosaicing algorithms. The default algorithm for X-Trans sensors is Markesteijn 1-pass, which produces fairly good results. For a bit better quality (at the cost of much slower processing), choose Markesteijn 3-pass. Though VNG demosaic is faster than Markesteijn 1-pass on certain computers, it is more prone to demosaicing artifacts.
Additionally, darktable supports a special demosaicing algorithm – passthrough (monochrome). It is not a general-purpose algorithm useful for all images; it is only useful for cameras with the color filter array physically removed from the sensor, e.g. physically scratched off. Normally, demosaic reconstructs the missing color channels by interpolating with data from the neighboring pixels. But since the color filter array is not present, there is nothing to interpolate, so this algorithm simply sets all color channels to the same value, which results in gray pixels and thus a monochrome image. This method guarantees that there are no interpolation artifacts that would otherwise be present should one of the standard demosaicing algorithms be used.
Some further parameters of this module can activate additional averaging and smoothing steps. They might help to reduce remaining artifacts in special cases.
Demosaic is always applied when exporting images. On the monitor display demosaic is only done when the zoom level is greater than 50% or when the corresponding preference setting “demosaicing for zoomed out darkroom mode” (see Section 8.4, “Darkroom”) is set accordingly. Otherwise color channels are taken from neighboring pixels without an expensive interpolation.
Set the demosaic method. darktable currently supports PPG, AMAZE, and VNG4 for Bayer sensors. For X-Trans sensors darktable supports VNG, Markesteijn 1-pass, and Markesteijn 3-pass.
Set the threshold for an additional median pass. Defaults to “0” which disables median filtering. This option is not shown for X-Trans sensors.
Raw denoise allows you to perform denoising on the data before it gets demosaiced. It is ported from dcraw.
Set the threshold for noise detection. Higher values lead to stronger noise removal and higher loss of image detail.
The noise of an image is usually not only fine grain, but also more or less coarse grain. These curves allow you to denoise more or less depending on the coarseness of the visible noise. The left of the curve acts on very coarse grain noise, while the right of the curve acts on very fine grain noise. Pushing the curve up results in more smoothing, pulling it down in less smoothing. For example, you can preserve very fine grain noise by pulling the rightmost point of the curve down to the minimum value. As another example, if you are tackling chroma noise with a blend mode, you can push up the right part of the curve, as colors are not supposed to change much at fine grain scales: this helps especially if some isolated pixels are left undenoised. Considering the R, G, and B curves, the best way to use them is to look at one channel at a time using the channel mixer module in gray mode, denoise that particular channel, and then do the same for the other channels. This way, your denoising can take into account the fact that some channels may be noisier than others. Be aware that guessing which channel is noisy without actually seeing the channels individually is not straightforward and can be counterintuitive: a pixel which is completely red may not be caused by noise on the R channel, but by noise on the B and G channels.
You control the detection sensitivity with the threshold parameter and the level of elimination with the strength parameter.
The threshold of the detection, i.e. how strong a pixel's value needs to deviate from its neighbors to be regarded as a hotpixel.
This will extend the detection of hot pixels: a pixel will be regarded as hot even if a minimum of only three (instead of four) neighboring pixels deviate by more than the threshold level.
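The detection logic can be sketched as follows. This illustrates the interplay of the threshold and strengthen options, not darktable's actual implementation; in particular, the choice of comparing against same-color neighbors two pixels away is an assumption about a Bayer layout:

```python
def is_hot(raw, y, x, threshold, permissive=False):
    """Flag a pixel as hot when enough neighbors are darker by more than
    `threshold`. Sketch only; neighbors two pixels away are assumed to
    share the pixel's Bayer color."""
    neighbors = []
    for dy, dx in ((-2, 0), (2, 0), (0, -2), (0, 2)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < len(raw) and 0 <= nx < len(raw[0]):
            neighbors.append(raw[ny][nx])
    deviating = sum(1 for v in neighbors if raw[y][x] - v > threshold)
    # "strengthen" lowers the requirement from four deviating neighbors
    # to three, catching more hot pixels.
    needed = 3 if permissive else 4
    return deviating >= needed
```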
The module has no parameters. On activation it will automatically try to optimize away visible chromatic aberrations.
The underlying model assumes as input an uncropped photographic image. The module is likely to fail when you zoom into the image, as in that case it will only receive parts of your photograph as input in darktable's pixelpipe. As a consequence, chromatic aberrations do not get corrected properly in the center view. This limitation only applies to interactive work, not to file export.
This module currently only works for images recorded with a Bayer sensor (which is the sensor used in the majority of cameras).
This module tries to reconstruct color information that is usually clipped because of incomplete information in some of the channels. If you do nothing, clipped areas often take on the tint of the channels that are not clipped. For example, if your green and blue channels are clipped, your image will appear red in the clipped areas.
You can choose between three methods for highlight reconstruction.
“Clip highlights” just clamps all the pixels to the white level. Effectively this converts all clipped highlights to neutral grey tones. This method is most useful in cases where clipped highlights occur in non-colored areas like clouds in the sky.
“Reconstruct in LCh” analyses each pixel having at least one channel clipped and transforms the information in LCh color space in an attempt to correct the clipped pixel using the values of the other (3 for Bayer or 8 for X-Trans) pixels in the affected sensor block. This method usually does a better job than “Clip highlights” as some details in the clipped areas are preserved. However it is incapable of reconstructing any color information – the reconstructed highlights will all be monochrome, but brighter and with more detail than with “Clip highlights”. This method works fairly well with a high-contrast base curve (such as most manufacturers apply to their JPEGs), which renders highlights desaturated. It is a good option for naturally desaturated objects such as clouds.
“Reconstruct color” uses an algorithm that transfers color information from the unclipped surroundings into the clipped highlights. This method works very well on areas with homogeneous colors and is especially useful on skin tones with smoothly fading highlights. It fails in certain cases where it produces maze-like artifacts at highlights behind high-contrast edges, such as fine, well-exposed structures in front of overexposed background (for instance ship masts or flags in front of the blown-out sky).
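The first of these methods is simple enough to express in one line. A sketch, with `white` standing for the white level of the raw data:

```python
def clip_highlights(pixel, white):
    """Clamp every channel of an RGB pixel to the white level. Since all
    clipped channels end up at the same value, clipped areas turn
    neutral gray rather than taking on the tint of unclipped channels."""
    return [min(channel, white) for channel in pixel]
```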
Tip: for highlight reconstruction to be effective you need to apply a negative EV correction in the exposure module (see Section 3.4.1.12, “Exposure”). If you want to avoid a general darkening of your image you can use darktable's mask feature in that module to limit the EV correction to only the highlights (see Section 3.2.5.5, “Drawn mask” and Section 3.2.5.6, “Parametric mask”).
The only control element of this module is a color selector which allows you to adjust for different colors of your film material. Clicking on the colored field will open a color selector dialog which offers a choice of commonly used colors, or allows you to define a color in RGB color space. You can also activate the color picker and take a color probe from your image – preferably from the unexposed border of your negative.
The camera specific black level of the four pixels in the RGGB Bayer pattern. Pixels with values lower than that level are considered to contain no valid data.
The camera specific white level. All pixels with values above are likely to be clipped and handled accordingly in the highlight reconstruction module (see Section 3.4.1.27, “Highlight reconstruction”). Pixels with values equal to the white level are considered to be white.
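Taken together, the black and white levels define how a raw value maps into the working range. A minimal sketch of that normalization:

```python
def normalize(value, black, white):
    """Subtract the black level and scale by the usable range.
    Values at or below the black level carry no valid data (0.0);
    values at or above the white level are considered clipped (1.0)."""
    v = (value - black) / (white - black)
    return min(max(v, 0.0), 1.0)
```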