ILLUMINA v2.0 USER'S GUIDE

Martin Aubé, Ph.D. & Alexandre Simoneau M.Sc. copyright 2019

Latest update: January 28 2021

This version is deprecated. Please refer to https://lx02.cegepsherbrooke.qc.ca/~aubema/index.php/Prof/IlluminaGuide2021 for the most up-to-date one.

News

New features

  • 07/01/2021 - Added direct irradiance calculation (W/m2/nm).
  • 21/12/2020 - Separated the direct radiance calculation from the scattered radiance to increase the computing speed of direct radiance. One input parameter was added: the scattering switch.
  • 10/07/2020 - Added VIIRS background removal support.
  • 08/05/2020 - Illumina is now able to look down toward the ground.
  • 08/05/2020 - The model now corrects for the atmospheric extinction between the sources and the VIIRS satellite, and for the blocking effect of obstacles on the signal detected by VIIRS-DNB.
  • 08/05/2020 - Added the FOV angle for the direct radiance calculation as an input in the input_params.in file.
  • 05/05/2020 - Calculation of direct radiances (W/m2/sr/nm) instead of irradiances.
  • 05/04/2020 - Added the calculation of the direct irradiances from sources and reflecting surfaces.
  • 28/03/2020 - Added cloud fraction support to the calculation of overhead cloud radiance.

Bug fixes

  • 21/09/2020 - Corrected a bug in part of the topography blocking.
  • 03/07/2020 - The angle calculations for the horizon masking of the direct radiances were incorrect.
  • 03/07/2020 - Corrected an error in the horizon blocking routine. In some cases, that effect was underestimated.
  • 27/04/2020 - Extrapolation of reflectance values for wavelengths lower than the minimum wavelength of the reflectance spectra. This correction only applies to calculations below 420 nm if you used the ASTER reflectance for asphalt.
  • 06/04/2020 - Corrected a bug in the reading of the mean distance between obstacles.
  • 02/04/2020 - Corrected an important error related to topography screening: if you performed a modeling experiment involving significant topography, we strongly suggest redoing your calculations.

General information

This user's guide aims to help users of the ILLUMINA sky radiance model prepare and manage their own simulations. We hope that the document is accurate enough, but we will be happy to improve it according to any difficulties you may encounter when using it. For any help, please contact the PI, Dr Martin Aubé (martin.aube at cegepsherbrooke.qc.ca). This guide is the most recent one and incorporates recently added features of the model such as the hyperspectral support, the subgrid obstacle filling factor and the contribution of cloud reflection.

Optimal wavelength range

ILLUMINA cannot be used at arbitrary wavelengths; only the visible atmospheric window is available. This limitation is mainly due to the fact that we neglect the molecular absorption features of the atmosphere. We therefore strongly suggest limiting any analysis with ILLUMINA to the 330 nm to 730 nm range. Most lighting systems concentrate their emission in that range. If you extend the range beyond 730 nm, you are likely to overestimate the sky radiance in that part of the spectrum. Note that if NIR emission is restricted to specific spectral lines, the impact of the atmospheric absorption features can be mitigated, provided the emission lines do not coincide exactly with the atmospheric absorption features. Please click on the image below to verify. Note also that some reflectance spectra are not defined below 420 nm (e.g. asphalt); in such a situation, the nearest-neighbour method is used to determine the reflectance for wavelengths lower than that.

[Image: atmospheric transmission and absorption spectrum, prepared by Robert A. Rohde for the Global Warming Art project.]
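For wavelengths below the defined range of a reflectance spectrum, the nearest-neighbour extension mentioned above simply holds the shortest-wavelength value. Here is a minimal sketch of that behaviour; the reflectance values are invented and the real ASTER file format is not assumed:

import numpy as np

# Hypothetical reflectance spectrum defined only from 420 nm upward (e.g. asphalt).
wl_known = np.array([420.0, 500.0, 600.0, 700.0])    # nm
refl_known = np.array([0.09, 0.11, 0.13, 0.15])      # relative reflectance (invented values)

# Wavelengths requested by a simulation, some below the defined range.
wl_model = np.array([380.0, 400.0, 450.0, 650.0])

# np.interp holds the edge value below the first point, which is equivalent
# to the nearest-neighbour extension described above.
refl_model = np.interp(wl_model, wl_known, refl_known)
print(refl_model)   # 380 nm and 400 nm both receive the 420 nm value (0.09)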

Operating system

ILLUMINA should be used on a computer running Linux with Fortran and gcc compilers (e.g. gfortran) and the pip package manager installed. Many convenience scripts also require Python (3.8).

Other software dependencies

The following software is required by the system:

  • git
  • python
  • gfortran
  • python3-tk
  • gdal

Multiple python libraries are also needed. These can be installed and updated using pip.

  • pyproj
  • pyyaml
  • numpy
  • scipy
  • matplotlib
  • h5py
  • astropy
  • geopandas
  • pillow
  • gitpython

Other Python libraries might be required; install them as needed. We also suggest some libraries that are not required but could be useful.

  • ipython
  • gdal

In all cases, the most recent version of the code should be used. The code is evolving rapidly, so by updating your version frequently you will benefit from new features and bug fixes.

Installing the code

The ILLUMINA model is available from github:

https://github.com/aubema/illumina

All source code is released under the GNU General Public License.

To install the model from github, please follow these steps:

If you are a non-developer user:

cd
mkdir git
cd git
git clone https://github.com/aubema/illumina.git

Then you must compile the code:

cd $HOME/git/illumina
bash makeILLUMINA

Then modify the $HOME/.bashrc file by typing the following commands in the terminal window. This will make the programs executable from anywhere on the file system.

echo "PATH=$PATH:$HOME/git/illumina/:$HOME/git/illumina/bin" >> $HOME/.bashrc
echo "export PYTHONPATH=$PYTHONPATH:$HOME/git/illumina/" >> $HOME/.bashrc
source $HOME/.bashrc

Preparing an execution

In order to execute the model, some data manipulation is needed to prepare the inputs. It is strongly recommended to separate the data from the code by creating a new directory somewhere on your computer and placing all the relevant data within it.

Once that is done, executing the init_run.py script will copy the necessary files to the working directory. The parameter files can then be modified to contain the correct values for your experiment.

Downloading and preparing the required satellite images

ILLUMINA requires some satellite data to run properly, namely a digital elevation model and the nocturnal light emittance. These data also need to be projected in a suitable spatial reference system and clipped to the simulation domain.

Domain definition

Defining the simulation domain is a crucial part of the input preparation, as it directly affects everything afterwards. The first step is to define the location(s) where the simulation of the artificial sky radiance is desired. Then, the projection needs to be defined, as the model needs to work with coordinates in meters instead of degrees.

The parameters file named domain_params.in defines the domain. It should contain the following parameters:

latitude: 20.708
longitude: -156.257
srs: auto
nb_layers: 3
nb_pixels: 27
scale_min: 1000
scale_factor: 3
buffer: 10
  • latitude and longitude are the coordinates of the observer. These two parameters can also be lists (of the format [lat1, lat2, ...]) for multiple observing locations.
  • srs is the spatial reference system used by the model. Setting it to 'auto' lets the model select an appropriate one automatically, but it can also be selected manually by writing the corresponding EPSG code (for instance 'epsg:3561').
  • The domain is defined by a series of overlapping layers with different scales. This allows the description of vast domains while keeping a high resolution close to the observer. The last five parameters describe these layers. (1)
    • nb_layers is the number of these layers,
    • nb_pixels is the width of the domain in pixels. It will be the same in each layer (2),
    • scale_min is the resolution in meters of the smallest layer. We strongly suggest not using a value lower than 20 m; this is a hard-coded limitation in the ILLUMINA kernel. If you select a smaller value, the model will still run, but we cannot confirm that the results will be valid.
    • scale_factor is the ratio between the pixel dimensions of successive layers, and
    • buffer is the size of the buffer (in kilometers) around each layer, used to properly consider every possible optical path the light can use to reach the observer. This is mainly useful to account for photons reflecting on the ground or 2nd-order scattered photons blocked by the elevation model. Since the 2nd order of scattering is calculated up to 100 km away from the source and from the line of sight, a buffer at least 100 km wide would be ideal. However, for most resolutions such a huge buffer is not realistic. The native size of the domain for any modelling scale is 512 x 512 pixels. If the buffer exceeds this size (i.e. nb_pixels + 2 x buffer > 512), the buffer will be limited to the domain size rather than the requested value.

The defineDomain.py script is then used to generate the domain.ini file containing the details of the defined layers. It will print the geometric properties of each layer so you can check that the dimensions are reasonable. We suggest a largest domain size of around 300-600 km. The script also prints the coordinates of the south-west and north-east corners. These are useful for bounding the domain so that only the relevant satellite imagery is downloaded in the following steps.

(1) Note that defineDomain.py can be called as often as required until you are satisfied with the layers/domain properties.

(2) As a rule of thumb, we suggest not exceeding 255 for that number.
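To get a feel for the layer geometry before running defineDomain.py, the following sketch reproduces the arithmetic implied by the parameter descriptions above; the exact conventions used by the script may differ slightly, so it remains the authoritative calculation.

# Rough preview of the nested layers defined in domain_params.in
# (values taken from the example above).
nb_layers = 3
nb_pixels = 27
scale_min = 1000       # resolution of the finest layer, in meters
scale_factor = 3
buffer_km = 10

for layer in range(nb_layers):
    pixel_size = scale_min * scale_factor ** layer      # m per pixel in this layer
    extent_km = nb_pixels * pixel_size / 1000.0         # width of the layer
    print(f"layer {layer}: {pixel_size:6.0f} m/pixel, "
          f"{extent_km:6.1f} km wide, {extent_km + 2 * buffer_km:6.1f} km with buffer")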

VIIRS-DNB imagery

The night emittance is obtained from VIIRS-DNB imagery that can be found here. One should download the appropriate tile(s) for the desired period (year and month). We suggest the VCMSLCFG configuration of the monthly composite product because of its stray light correction, but the choice is left to the user. You will want to extract the 'avg_rade9.tif' file, as it contains the actual radiance values, whereas the 'cf_cvg.tif' file contains information related to the quality of the image. The tif file(s) should be placed inside a subfolder named VIIRS-DNB inside your experiment directory.

It is also possible to use the VIIRS background method proposed by Coesfeld et al. (2020) for more accurate results. In that case, the VCMCFG product needs to be used instead, and the correction data needs to be downloaded from here and decompressed in the VIIRS-DNB folder.

Watermask

When used with VIIRS-DNB input, the model needs a water mask to properly calculate the light fluxes. You can download it here and save it to the experiment folder.

SRTM data

The digital elevation model is made with the SRTM elevation data that can be found here. One should use the spatial filter to select only the required tiles, and then follow the download procedure. The extracted hgt files should be placed inside a subfolder named SRTM inside your experiment directory.

To extract multiple archives at once, one can use unzip "*.zip"

Processing the input images with Illuminutils.py

The Illuminutils.py script should be executed from the experiment directory containing the two data subfolders explained above.

Two files should be produced by this script:

  • stable_lights.hdf5
  • srtm.hdf5
[Images: sample VIIRS (stable_lights) and SRTM (srtm) files for Hawaii.]

The standard format used by ILLUMINA is HDF version 5. These can be visualized using tools like hdfcompass. We also provide convenience Python functions in the hdftools package included with ILLUMINA.
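For a quick look from Python, a generic h5py walk lists the datasets without assuming anything about the internal layout (the file name below is one of those produced above):

import h5py

# List every dataset found in an ILLUMINA HDF5 file, with its shape and type.
with h5py.File("srtm.hdf5", "r") as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)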

Converting other datasets

Illuminutils.py can also be used to convert other datasets to the Illumina format for a specified simulation domain. As long as the domain.ini file is in the current working directory, the script can be called as

Illuminutils.py NAME FILELIST

where NAME will be the name of the output file (without the extension) and FILELIST is a list of one or more files (the use of bash wildcards is recommended here) to be warped. Note that all the files will be warped together, so they should be tiles of the same dataset. The supported formats are those that can be processed by GDAL.

Making light inventory

In order to model the propagation of the light, the properties of the light sources must be defined. There are two ways to do this in ILLUMINA: 1- using the VIIRS-DNB spaceborne radiance monthly product, 2- using a point source inventory. Both of them can be used together, as long as they do not overlap: there cannot be point sources in a pixel already containing sources derived from VIIRS-DNB.

Using uniform overlapping circular zones

The first way is to define overlapping circular zones of uniform properties. These zones are defined by their center point and a radius, and specify the mix of lamps assumed to be present in that area (lamp types differing by their photometry function or light output pattern (LOP), their spectrum and their height) as well as the average distance between obstacles, the obstacle height and the obstacle filling factor. Two or more zones may be in the same geographical region or partly overlap. Each new zone overwrites the previous one in case of intersection between the zones. All the points that are not included in a zone will be ignored. To create a zone, edit an ASCII file with a simple text editor like kwrite or gedit following the format shown below:

Sample inventory file for the Hawaii territory

# lat   lon        R    hobs    dobs    fobs    hlamp   Zone inventory          Comment
21.4474 -157.9712  50   7       25      0.5     7       90_H_5  10_M_10         # Oahu
21.0052 -157.0123  40   7       25      0.5     7       90_H_5  10_M_10         # Molokai + Lanai
20.7764 -156.1512  64   7       25      0.5     7       18_H_10 72_H_0  10_M_10 # Maui
19.6468 -155.5714  103  7       25      0.5     7       87_L_10 8_H_10  5_M_5   # Big Island
19.2878 -155.2179  23   7       25      0.5     7       0_L_0                   # Lava

This file can have any number of header lines as long as they are preceded by a '#' symbol. Anything on the same line following this symbol will not be considered by the model. Each data line contains several parameters:

  1. lat: central latitude of the circular zone.
  2. lon: central longitude of the circular zone.
  3. R: radius of the circular zone (in kilometers).
  4. hobs: subgrid averaged obstacles height (in meters)
  5. dobs: averaged distance between subgrid obstacles (in meters)
  6. fobs: obstacle filling factor i.e. probability for a photon to hit an obstacle (0. to 1.)
  7. hlamp: averaged lamps height relative to the ground (in meters)
  8. list of lamps characteristics

Each lamp characteristic is composed of three fields separated by '_'.

  • The first field is the weight of the lamp type defined by the following two fields. The weight is later converted to a ratio by normalizing by the sum of all the weights for that zone.
  • The second is a reference word (3) corresponding to the spectral power distribution of the lamp.
  • The third is a reference word (3) corresponding to the angular power distribution of the lamp, or light output pattern (LOP).

The weighting is applied to the luminous flux of the spectral power distribution of the lamp, in lumens. This means that the spectra are weighted by the photopic sensitivity curve.

As can be seen with the last zone of the example, a zone can have a weight of 0. In that case, the pixels associated with it will be discarded as if they were not in a zone.

Example


Suppose one zone is composed of 50% HPS with the angular photometry file toto1_fctem.dat, 20% HPS with the photometry file toto2_fctem.dat, and 30% LED4000K with the photometry file toto3_fctem.dat, and assume that you use the spectral power distributions provided in the Example/Lights directory (HPS_Helios.spct and 4LED_LED-4000K-Philips.spct) to create the light inventory.

Then you should write 50_HPS_toto1 20_HPS_toto2 30_4LED_toto3 in column 8 of your inventory file for that zone.

The data referenced by the last two fields must be located in a subfolder named 'Lights'. This folder must contain the following files in addition to the ones used to define the lamp inventory:

  • photopic.dat
  • scotopic.dat
  • viirs.dat

These files can be found in the ILLUMINA installation directory (Examples/Lights). Any additional file used to characterize the lamps must follow this format:

  • Angular light output pattern (LOP) files must have the extension '.lop'. They consist of two-column ASCII data, where the first column is the relative intensity and the second is the zenith angle in degrees. The lop file must contain 181 values, from z = 0 to z = 180 in 1 degree steps.
  • Spectral files must have the extension '.spct'. They are two-column ASCII data files with a single header line. The first column contains the wavelength in nm and the second contains the relative intensity.

The normalization of all these files is not important, as it will be done by the programs.

(3) In all cases, any characters preceding the first underscore (_) in the lop or spct file name form the reference word that must be written in the inventory file.

All .spct files must have the same wavelength scale. All LOP Files must share the same angle scale.
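A small consistency check along those lines can save a failed run later. This sketch only relies on the file formats described above and assumes the files sit in the experiment's Lights folder:

import glob
import numpy as np

# Verify that every .spct file shares the same wavelength scale and that every
# .lop file has 181 zenith angles from 0 to 180 degrees, as required above.
ref_wl = None
for path in sorted(glob.glob("Lights/*.spct")):
    wl = np.loadtxt(path, skiprows=1)[:, 0]     # one header line; wavelength in column 1
    if ref_wl is None:
        ref_wl = wl
    elif not np.array_equal(wl, ref_wl):
        print(f"wavelength scale mismatch: {path}")

for path in sorted(glob.glob("Lights/*.lop")):
    angles = np.loadtxt(path)[:, 1]             # zenith angle is the second column
    if len(angles) != 181 or angles[0] != 0 or angles[-1] != 180:
        print(f"unexpected angle grid: {path}")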

Using a discrete light source inventory

The second way to describe the lights is to directly specify their properties on a lamp-by-lamp basis. In this case, the file needs to have the following format:

# lat lon pow hobs dobs fobs hlamp spct lop
21.295693 -157.856846 250 20 25 0.9 7 MH 5
21.295776 -157.856782 150 20 25 0.9 7 LED 0
21.295844 -157.857114 100 30 30 0.85 7 MH 5
21.286488 -157.845900 100 50 10 0.3 10 LED 1

where

  1. lat: Latitude of the light source
  2. lon: Longitude of the light source
  3. pow: Intensity of the source in lumen
  4. hobs: Averaged obstacles height (in meters)
  5. dobs: Averaged distance between obstacles (in meters)
  6. fobs: Obstacle filling factor i.e. probability for a photon to hit an obstacle (0. to 1.)
  7. hlamp: Light source height relative to the ground (in meters)
  8. spct: Spectral power distribution keyword
  9. lop: Angular power distribution keyword

It is possible to use both methods simultaneously, but in that case all the discrete light sources must fall outside of the zones or inside one with a weight of 0.
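If your point source data comes from a GIS export or a spreadsheet, the inventory file can be written programmatically. Here is a minimal sketch, reusing the first two rows of the example above; the output file name lamps_inventory.txt is only a suggestion, use whatever name you declare in input_params.in:

# Write a discrete (lamp-by-lamp) inventory in the format described above.
lamps = [
    # lat,      lon,          pow, hobs, dobs, fobs, hlamp, spct,  lop
    (21.295693, -157.856846,  250, 20,   25,   0.9,  7,     "MH",  "5"),
    (21.295776, -157.856782,  150, 20,   25,   0.9,  7,     "LED", "0"),
]

with open("lamps_inventory.txt", "w") as f:
    f.write("# lat lon pow hobs dobs fobs hlamp spct lop\n")
    for row in lamps:
        f.write(" ".join(str(v) for v in row) + "\n")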

Defining the experiment

The execution mode

You may be interested in running ILLUMINA for many reasons. By default, ILLUMINA will calculate the artificial diffuse radiance, the part that is produced by the clouds, the direct radiance reaching the observer along a line of sight to the sources, and the direct radiance coming from a line of sight to reflecting surfaces. If you are mostly interested in the direct radiances, it may be a good idea to maximize the resolution near the observer. The calculation of the direct radiance within the mean free path toward obstacles does not experience any obstacle blocking; blocking by obstacles only occurs when the observer is farther than the mean free path to the ground. This parameter is defined when you specify the subgrid obstacle properties with the variable dobs; in fact, dobs is twice the value of the mean free path. If you are not interested in the sky or cloud radiances, but only in the direct radiance, you can speed up the calculation by switching off the scattering.

Create the input parameters file

The parameters used by the model for executing the experiment are contained in the input_params.in file, as described below:

# input parameters
exp_name: Hawaii                 # base name of the experiment (use whatever you want)
zones_inventory: inventory.txt   # VIIRS-DNB derived inventory
lamps_inventory:                 # point source inventory
nb_bins: 5                       # number of spectral bins
lambda_min: 380
lambda_max: 830
reflectance:                     # weighting of basic ASTER reflectance spectra
    asphalt: 0.8
    grass: 0.2
    snow: 0
aerosol_profile: maritime        # Aerosol profile. 'maritime','urban' or 'rural'
relative_humidity: 70            # 0, 50, 70, 80, 90, 95, 98, 99
estimated_computing_time: 1      # estimated computing time per case in hours
batch_file_name: batch
# parameters after here can be lists
observer_elevation: 10           # elevation above ground level (m)
air_pressure: 101.3              # lowest domain level atmospheric pressure in KPa
reflection_radius: 9.99          # radius around light sources where reflections can occur (m)
cloud_model: 0                   # cloud model selection 0=clear, 1=Thin Cirrus/Cirrostratus, 2=Thick Cirrus/Cirrostratus, 3=Altostratus/Altocumulus,  4=Stratocumulus, 5=Cumulus/Cumulonimbus
cloud_base: 0                    # height of the cloud base (m)
cloud_fraction: 0                # Cloud cover fraction (0-100)
stop_limit: 5000.                # Stop computation when the new voxel contribution is less than 1/stoplim of the cumulated flux (suggested value = 5000.)
double_scattering: True          # Activate double scattering (True/False)
single_scattering: True          # Activate single scattering (True/False)
elevation_angle: [90,45]
azimuth_angle: [0,60,120,180,240,300]
direct_fov: 5                    # field of view for the direct radiance calculations
aerosol_optical_depth: 0.11      # AOD value at 500 nm
angstrom_coefficient: 0.7        # angstrom exponent value
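The file above is YAML-like, so a quick sanity check of your edits can be done with pyyaml (already listed as a dependency). This is only a convenience check and does not replace make_inputs.py:

import yaml

# Parse input_params.in and echo a few values to confirm the file is well formed.
with open("input_params.in") as f:
    params = yaml.safe_load(f)

print(params["exp_name"], params["nb_bins"])
print(params["reflectance"])                       # the weights should sum to about 1
print(params["elevation_angle"], params["azimuth_angle"])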

The reflective surface types are ASTER files located in the `Lights` folder.

  • The contribution of the 2nd scattering is very important when sources are far away from the observer; it can account for up to 60% of the sky brightness. For nearby sources, 2nd scattering typically contributes a few percent (less than 10%). In such a case you may want to deactivate this feature to save computing time.
  • For an elevation angle of 90 degrees (as defined in the example above), all azimuth angles are degenerate. The make_inputs.py script (see next section) knows that, and will therefore only run one case for this specific elevation.
  • If you plan to produce a panorama view of the direct radiance, you need to make sure that your angular viewing mesh grid is finer than the direct_fov value. E.g. if you define direct_fov=5, you will need the following grid (a short generator sketch follows this list):
    • elevation_angle: [2,6,10,14,18,22,26,30,34,38,42,46,50,54,58,62,66,70,74,78,82,86,90]
    • azimuth_angle: [2,6,10,14,18,22,26,30,34,38,42,46,50,54,58,62,66,70,74,78,82,86,90,
    94,98,102,106,110,114,118,122,126,130,134,138,142,146,150,154,158,162,166,170,174,
    178,182,186,190,194,198,202,206,210,214,218,222,226,230,234,238,242,246,250,254,
    258,262,266,270,274,278,282,286,290,294,298,302,306,310,314,318,322,326,330,334,
    338,342,346,350,354,358]
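Rather than typing such long lists by hand, they can be generated; a minimal sketch matching the 4 degree spacing of the example above:

# Build elevation/azimuth grids whose spacing stays below direct_fov
# (4 degree steps for direct_fov = 5, as in the example lists above).
step = 4
elevation_angle = list(range(2, 91, step))          # 2, 6, ..., 90
azimuth_angle = list(range(2, 359, step))           # 2, 6, ..., 358
print(len(elevation_angle), len(azimuth_angle))     # 23 elevations, 90 azimuths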

Run the make_inputs.py script

Once all the data is obtained and the input parameter file is created, the make_inputs.py script is used to prepare the data for the model.

Quick file check

The script should produce a directory named 'Inputs'. It should contain:

  • N * W fctem_wl_WWW_zon_NNN.dat files, where N is the number of zones and W the number of wavelengths used. In the case of the example, both these numbers are 5, so you should have 25 fctem files. These contain the angular emission information.
  • You should also have lumlp files, one for each wavelength and zone combination plus one for each wavelength giving the global view. lumlp files give the total lamp spectral flux for each pixel of a zone at the given wavelength. Note that this flux is not corrected for the atmospheric attenuation and obstacle blocking between the sources and the satellite. Such a correction will be done later on while running the model.
  • There must also be a file named exp_name_altlp.hdf5. This is the lamp height relative to the ground.
  • A file named exp_name_obstd.hdf5. This is the distance between obstacles.
  • A file named exp_name_obsth.hdf5. This is the height of the obstacles.
  • A file named exp_name_obstf.hdf5. This is the obstacle filling factor.
  • A file named origin.hdf5. This is a flag array indicating whether the inventory was determined from VIIRS or from point sources.
  • You should also find two .lst files.
  • In addition to all that, you should see some symbolic links: one for integration_limits.dat, one for srtm.hdf5, and W .mie.out files, one for each wavelength.

lumlp files are in units of W/nm.
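A quick way to verify the counts listed above is a small check run from the experiment directory. The fctem pattern follows the naming given above; the lumlp pattern is a guess, so adjust it if your file names differ:

import glob

# Compare the number of generated files with the expectations described above.
nb_zones = 5      # zones in the inventory
nb_bins = 5       # spectral bins in input_params.in

fctem = glob.glob("Inputs/fctem_wl_*_zon_*.dat")
lumlp = glob.glob("Inputs/*lumlp*")
print(f"fctem files: {len(fctem)} (expected {nb_zones * nb_bins})")
print(f"lumlp files: {len(lumlp)} (expected about {(nb_zones + 1) * nb_bins})")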

[Images: example run at 605 nm with contrast boosted. Total lumlp; Zone 1 = Oahu; Zone 2 = Molokai and Lanai; Zone 3 = Maui; Zone 4 = Big Island. Map of the zones made with Free Map Tools.]

Zone 5, which is the lava lakes, is all black because its light isn't considered in the model. To achieve this, its lamp inventory was given a zero weight. You can verify that in the 'inventory.txt' file.

Alternative scenarios

You may be interested in simulating alternative scenarios based on the current situation, for example artificially replacing all light sources with a new photometry. This is done with the alt_scenario_maker.py script. Help on that script is available by calling

alt_scenario_maker.py -h

If used with an alternative zones inventory, a replacement inventory needs to be in the same directory and contain a set of lamp characteristics for each zone. For example,

1_AMBR_0        # Oahu
1_AMBR_0        # Molokai + Lanai
1_AMBR_0        # Maui
1_AMBR_0        # Big Island
0_L_0           # Lava

If used with an alternative lamps inventory, a replacement inventory needs to be in the same directory and contain a set of lamp characteristics in the same format as the initial inventory.

The script will then generate a folder named Inputs_NAME containing the relevant data.

Submitting the calculations to a Linux cluster

To perform the calculations, we now connect to a 'Cluster'. In our case, we connected to 'Mammouth serial II' located at Université de Sherbrooke. The task scheduler used is Slurm. You may need to manually adjust some files to match the execution environment you are using.

Then it is necessary to recompile the ILLUMINA model using 'makeILLUMINA' or through the following command:

bash makeILLUMINA

The `Inputs` directory created for each experiment in the previous steps should now be transferred to the cluster interactive node via the scp protocol.

Preparing the batch execution

Now we need to run the makeBATCH.py program, which will prepare the execution folder for each calculation directory on the cluster. This must be done from the Inputs folder(s). The documentation of the function is available by calling makeBATCH.py -h. Note that files with names conflicting with the batch name provided either on the command line or in the 'input_params.in' file will be removed prior to execution. If you want to prepare multiple executions, make sure that they have different batch names.

On a Slurm cluster, you may use these commands to keep an eye on and manage the executions.

  • Use the command squeue -u $USER to verify the status of your jobs on the compute nodes before or during execution.
  • To delete a task, use scancel followed by the job number.
  • To delete all your jobs use the following command: scancel -u $USER

To execute the calculations, simply execute the bash file(s) produced by the makeBATCH.py script.

Find failed calculations

In many cases you will have a lot of calculations to complete your modeling experiment, each calculation going to a given core and/or node (if you run on a cluster). For various reasons, there is a chance that some of your calculations will fail, and finding them can be a difficult task. For that reason we provide a script called find-failed-runs.py. All you need to do is wait for all calculations to finish, go to the experiment folder and run the script. The script will show the path of the folders containing failed runs. If you run it with the -e option, the script will generate the code to relaunch the failed runs. You will probably want to store it in a file and then run it as a bash script.

find-failed-runs.py -e > your_final_run.bash

Then simply start the aborted runs by running this script

bash ./your_final_run.bash

Note that the script assumes that you are using a system running Slurm: you will see in the script that each execution begins with sbatch. If you are not using Slurm, just remove «sbatch --time=XX:XX:XX» from the script. In that case you will also probably need to split the file into several execution scripts to be sure that you will not use too much RAM. You can use the unix split command for that.

Extracting results

ILLUMINA generates two different outputs per calculation:

  1. An image file showing the relative contribution of each pixel to the calculated diffuse radiance. We call this image the PCL file or the contribution map.
  2. The numerical values for the total diffuse radiance, the cloud radiance, the direct radiance from sources and the direct radiance from reflecting surfaces.

To extract the data, the extract-output-data.py script is used. It can extract either the value of the diffuse radiance or all available components. It can also extract the contribution maps. Moreover, filters are available to only process certain parameter values. Documentation is available by calling

extract-output-data.py -h

The script will output the data directly, so it should be redirected to a file with

extract-output-data.py > results.txt

If you are not only interested in the total diffuse radiance (clouds + preceding atmosphere) but also want to extract the cloud contribution to the radiance, as well as the direct and the direct reflected radiance, you will need to run the script in full mode.

The script will output the data directly, so it should be redirected to a file with

extract-output-data.py -f > results.txt

There will be a column for each radiometric value.

As stated in the documentation, the contribution map can be extracted using the `-c` flag.

extract-output-data.py -c > results.txt

Units of the radiances are W/sr/m^2/nm.

To get the radiance of a spectral bin, one must multiply the radiance delivered by Illumina by the bandwidth (in nm).

Units of the irradiances are W/m^2/nm

To get the irradiance of a spectral bin, one must multiply the irradiance delivered by Illumina by the bandwidth (in nm).

PCL binary files (XXXXX_pcl.bin) do not have any units. The values represent the fractional contribution of a pixel to the total diffuse radiance; the sum over all pixels gives 1.

PCL files at different resolutions are combined into an HDF5 file to create the total diffuse radiance contribution file, in units of W/sr/m^2/nm. These files should be named the following way: elevation_angle_XX-azimuth_angle_YY-wavelength_ZZZ.Z.hdf5
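For a quick visual check of a combined contribution file, the sketch below displays every 2-D dataset it finds. The file name is only an example built from the naming convention above, and no assumption is made about the dataset names inside the file:

import h5py
import numpy as np
import matplotlib.pyplot as plt

# Display each 2-D dataset of a combined contribution file on a log scale.
fname = "elevation_angle_90-azimuth_angle_0-wavelength_425.0.hdf5"   # adapt to your run
maps = {}
with h5py.File(fname, "r") as f:
    def collect(name, obj):
        if isinstance(obj, h5py.Dataset) and obj.ndim == 2:
            maps[name] = obj[...]
    f.visititems(collect)

for name, data in maps.items():
    plt.figure()
    plt.imshow(np.log10(np.maximum(data, 1e-30)), origin="lower")
    plt.title(name)
    plt.colorbar(label="log10 contribution (W/sr/m^2/nm)")
plt.show()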

Analysing the results

The analysis can be done with your favorite tools. We strongly recommend the use of python and provide convenience functions in the pytools and hdftools packages provided with ILLUMINA.

Transforming to magnitudes per square arcsecond (for astronomers...)

Transforming diffuse radiances to sky brightnesses (SB) in units of mag/arcsec^2 is not a simple task. First of all, you have to consider that ILLUMINA only deals with the artificial component of the SB. If you are using ILLUMINA for a relatively dark site, the artificial SB may represent only a small part of the total SB. To transform radiance to total SB, you will need a relevant estimate of the natural contribution to the total SB. The natural SB is highly variable with time, altitude, season, observing direction, etc. It is composed of many sources such as the zodiacal light, the starlight, the airglow, the Milky Way, etc. Given that complexity, we suggest determining it experimentally for the modeled site and period you are interested in. To do so, you need an in situ measurement of the total SB from which you can extract the natural component, and then consider this component as a constant natural contribution to the SB for your specific site and period, regardless of the viewing angle, light inventory, obstacle properties, etc. Let us call the radiance responsible for that natural contribution the background radiance ({#R_{bg}#}). Assuming you have an in situ measurement of the total Johnson-Cousins SB, you need to accomplish the following steps to convert your artificial modeled radiance to total SB.

Integrate your radiances {#{R_a}#} according to the Johnson-Cousins filter, {#{R_a}#} being the modeled radiance you want to convert to {# SB #}. The sensitivity curves of the Johnson-Cousins filters are provided in the Example/Lights folder (e.g. JC_V.dat).

{#R_{bg}#}, the radiance corresponding to the natural level of the sky brightness ({#SB_{bg}#}), can be estimated using the SB data provided by Benn & Ellison 1998, except for the R band, which was taken from La Palma P99 ASTMON (2018-2019) measurements. We simply added 0.03 to the B and V values provided for La Palma (as recommended by Benn & Ellison 1998) and then used the formula below to calculate the radiance.

Johnson band   {#SB_{bg}#} ({#mag / arcsec^2#})   {#R_{bg}#} ({#W m^{-2} sr^{-1}#})
U              22.03                              1.72E-07
B              22.73                              2.05E-07
V              21.93                              2.22E-07
R              21.18                              5.10E-07
I              20.03                              7.65E-07

{## R_{bg} = R_0 10^{-0.4 SB_{bg}} ##}

For a value of modeled artificial radiance ({#{R_a}#}), use the following formula to convert to total SB:

{## SB = -2.5 log10 \left( \frac{{R_a} + R_{bg}}{R_0} \right) ##}

The {#R_0#} values are derived from the zero points obtained by the Bessell 1979 calibration (DOI 10.1086/130542) and are given in the table below.

Johnson band   {#R_0#} ({#W sr^{-1} m^{-2}#})
U              111.8
B              254.3
V              131.4
R              151.2
I              78.7
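Putting the two tables and the formula together, here is a worked V-band example. The artificial radiance value is invented for illustration; remember that {#{R_a}#} must already be integrated over the Johnson-Cousins band, so it is expressed in W/sr/m^2:

import math

# Convert a band-integrated artificial radiance to a total V-band sky brightness
# using the V-band entries of the two tables above.
R_a  = 5.0e-8     # modeled artificial radiance in V, W / sr / m^2 (invented example)
R_bg = 2.22e-7    # natural background radiance in V (first table)
R_0  = 131.4      # V-band zero point (second table)

SB = -2.5 * math.log10((R_a + R_bg) / R_0)
print(f"total V-band sky brightness: {SB:.2f} mag/arcsec^2")   # about 21.71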

Quick git using guide for developers

To publish your locally modified code on the server

git commit -a -m "new printout"
git push origin master

To update your local version of the codes

git pull