ILLUMINA v2.0 USER'S GUIDE

Martin Aubé, Ph.D. & Alexandre Simoneau, M.Sc.
Copyright 2019. Latest update: January 28, 2021.

This version is deprecated. Please refer to https://lx02.cegepsherbrooke.qc.ca/~aubema/index.php/Prof/IlluminaGuide2021 for the most up-to-date one.

News

New features
Bug fixes
General information

This user's guide aims to help users of the ILLUMINA sky radiance model prepare and manage their own simulations. We hope the document is accurate enough, but we will be happy to improve it based on any difficulties you encounter while using it. For any help, please contact the PI, Dr. Martin Aubé (martin.aube at cegepsherbrooke.qc.ca). This guide is the most recent one and incorporates recently added features of the model, such as hyperspectral support, the subgrid obstacle filling factor, and the contribution of cloud reflection.

Optimal wavelength range

ILLUMINA cannot be used at arbitrary wavelengths: only the visible atmospheric window is available. This limitation is mainly due to the fact that we neglect the molecular absorption features of the atmosphere. We therefore strongly suggest limiting any analysis with ILLUMINA to the 330 nm to 730 nm range. Most lighting systems concentrate their emission within this range. If you extend the range beyond 730 nm, you are likely to overestimate the sky radiance in that part of the spectrum. Note that if NIR emission is restricted to specific spectral lines, the impact of the atmospheric absorption features can be mitigated, provided the emission lines do not coincide exactly with those features. Note also that some reflectance spectra are not defined below 420 nm (e.g. asphalt); in such cases we use the nearest-neighbour method to determine the reflectance for shorter wavelengths.

[Figure: solar radiation spectrum and atmospheric absorption bands. Prepared by Robert A. Rohde for the Global Warming Art project.]

Operating system

ILLUMINA should be used on a computer running Linux, with Fortran and gcc compilers (e.g. gfortran) and the pip packaging system installed. Many convenience scripts also require Python (3.8).

Other software dependencies

The following software is needed by the system:
Multiple python libraries are also needed. These can be installed and updated using pip.
Other Python libraries might be required; install them as needed. We also suggest some libraries that are not strictly needed but could be useful.
In all cases, the most recent version of the code should be used. The code is evolving rapidly, so by updating your version frequently you will benefit from new features and bug fixes.

Installing the code

The ILLUMINA model is available from GitHub: https://github.com/aubema/illumina
All source code is released under the GNU public license. To install the model from GitHub as a non-developer user, follow these steps:

  cd
  mkdir git
  cd git
  git clone https://github.com/aubema/illumina.git

Then compile the code:

  cd $HOME/git/illumina
  bash makeILLUMINA

Then modify the $HOME/.bashrc file by typing the following commands in the terminal window. This will make the programs executable from anywhere on the file system.

  echo "PATH=$PATH:$HOME/git/illumina/:$HOME/git/illumina/bin" >> $HOME/.bashrc
  echo "export PYTHONPATH=$PYTHONPATH:$HOME/git/illumina/" >> $HOME/.bashrc
  source $HOME/.bashrc

Preparing an execution

In order to execute the model, some data preparation is needed. It is strongly recommended to separate the data from the code by creating a new directory somewhere on your computer and placing all the relevant data within it. Once this is done, executing the init_run.py script will copy the necessary files to the working directory. The parameter files can then be modified to contain the correct values for your experiment.

Downloading and preparing the required satellite images

ILLUMINA requires some satellite data to run properly, namely a digital elevation model and the nocturnal light emittance. These data also need to be projected in a suitable spatial reference system and clipped to the simulation domain.

Domain definition

Defining the simulation domain is crucial in the input preparation, as it directly affects everything afterwards. The first step is to define the location(s) where the simulation of the artificial sky radiance is desired.
Then the projection needs to be defined, as the model needs to work with coordinates in metres instead of degrees. The parameter file named domain_params.in defines the domain. It should contain the following parameters:

  latitude: 20.708
  longitude: -156.257
  srs: auto
  nb_layers: 3
  nb_pixels: 27
  scale_min: 1000
  scale_factor: 3
  buffer: 10
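As an illustration only, the nested-layer geometry implied by these parameters can be sketched as below. This sketch assumes (the parameter file itself does not say so) that layer i has a pixel size of scale_min * scale_factor^i and that each layer is nb_pixels wide; defineDomain.py prints the authoritative values.

```python
# Sketch of the layer geometry implied by domain_params.in.
# Assumption (not from the guide): pixel size grows by scale_factor
# at each successive layer, starting from scale_min.
nb_layers, nb_pixels = 3, 27
scale_min, scale_factor = 1000, 3   # metres, dimensionless

layers = []
for i in range(nb_layers):
    pixel_size = scale_min * scale_factor ** i     # m per pixel in layer i
    extent_km = nb_pixels * pixel_size / 1000.0    # layer width in km
    layers.append((pixel_size, extent_km))
    print(f"layer {i}: {pixel_size} m/pixel, {extent_km:.0f} km wide")
```

Under these assumptions the outermost layer spans nb_pixels * scale_min * scale_factor^(nb_layers-1) metres, which is what to compare against the suggested largest domain size.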
The defineDomain.py script is then used to generate the domain.ini file containing the details of the defined layers. It prints the geometric properties of each layer so you can validate that the dimensions are reasonable. We suggest a largest domain size of around 300-600 km. The script also prints the coordinates of the south-west and north-east corners. These are useful for bounding the domain so as to download only the relevant satellite imagery in the following steps. (1) Note that defineDomain.py can be called as often as required until you are satisfied with the layer/domain properties. (2) As a rule of thumb, we suggest not exceeding 255 for that number.

VIIRS-DNB imagery

The night emittance is obtained from VIIRS-DNB imagery that can be found here. One should download the appropriate tile(s) for the desired period (year and month). We suggest the VCMSLCFG configuration of the monthly composite product because of its stray light correction, but the choice is left to the user. You will want to extract the 'avg_rade9.tif' file, as it contains the actual radiance values, whereas the 'cf_cvg.tif' file contains information related to the quality of the image. The tif file(s) should be placed inside a subfolder named VIIRS-DNB inside your experiment directory. It is also possible to use the VIIRS background method proposed by Coesfeld et al. (2020) for more accurate results. In that case, the VCMCFG product needs to be used instead, and the correction data needs to be downloaded from here and decompressed in the VIIRS-DNB folder.

Watermask

When used with VIIRS-DNB input, the model needs a water mask to properly calculate the light fluxes. You can download it here and save it to the experiment folder.

SRTM data

The digital elevation model is built from the SRTM elevation data that can be found here. One should use the spatial filter to select only the required tiles, and then follow the download procedure.
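As an illustration of how the printed corner coordinates can bound the downloads, the sketch below lists the 1-degree SRTM tile names (N19W156-style) covering a bounding box. This helper and the sample corner values are hypothetical, not part of ILLUMINA.

```python
import math

# Hypothetical helper (illustration only): list the 1-degree SRTM
# tile names covering a bounding box given by its south-west and
# north-east corners in degrees.
def srtm_tiles(lat_sw, lon_sw, lat_ne, lon_ne):
    tiles = []
    for lat in range(math.floor(lat_sw), math.ceil(lat_ne)):
        for lon in range(math.floor(lon_sw), math.ceil(lon_ne)):
            ns = f"N{lat:02d}" if lat >= 0 else f"S{-lat:02d}"
            ew = f"E{lon:03d}" if lon >= 0 else f"W{-lon:03d}"
            tiles.append(ns + ew)
    return tiles

# Hypothetical corners roughly around the Hawaiian islands:
print(srtm_tiles(19.0, -157.5, 21.5, -155.0))
```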
The extracted hgt files should be placed inside a subfolder named SRTM inside your experiment directory. To extract multiple archives at once, one can use, for example:

  unzip '*.zip'

Processing the input images with Illuminutils.py

The Illuminutils.py script should be executed from the experiment directory containing the two data subfolders explained above. Two files should be produced by this script:
Sample files for Hawaii are provided. The standard format used by ILLUMINA is HDF version 5. These files can be visualized using tools like HDFCompass. We also provide convenience Python functions in the hdftools package included with ILLUMINA.

Converting other datasets

Illuminutils.py can also be used to convert other datasets to the ILLUMINA format for a specified simulation domain. As long as the domain.ini file is in the current working directory, the script can be called as

  Illuminutils.py NAME FILELIST

where NAME will be the name of the output file (without the extension) and FILELIST is a list of one or multiple files (the use of bash wildcards is recommended here) to be warped. Note that all the files will be warped together, so they should be tiles of the same dataset. The supported formats are those that can be processed by GDAL.

Making the light inventory

In order to model the propagation of the light, the properties of the light sources must be defined. There are two ways to do this in ILLUMINA: 1) using the VIIRS-DNB spaceborne radiance monthly product, or 2) using a point source inventory. Both can be used together, as long as they do not overlap: there cannot be point sources in a pixel already containing sources derived from VIIRS-DNB.

Using uniform overlapping circular zones

The first way is to define overlapping circular zones of uniform properties. These zones are defined by their center point and a radius, and specify the mix of lamps assumed to be present in that area (differing by their photometric function or light output pattern (LOP), their spectrum and their lamp height) as well as the average distance between obstacles, the obstacle height and the obstacle filling factor. Two or more zones may be in the same geographical region or partly overlapping. Each new zone overwrites the previous one where they intersect. Any point not included in a zone will be ignored.
To create a zone, edit an ASCII file with a simple text editor like kwrite or gedit, following the format shown below.

Sample inventory file for the Hawaii territory:

  # lat      lon        R    hobs dobs fobs hlamp  Zone inventory           Comment
  21.4474   -157.9712   50   7    25   0.5  7      90_H_5 10_M_10           # Oahu
  21.0052   -157.0123   40   7    25   0.5  7      90_H_5 10_M_10           # Molokai + Lanai
  20.7764   -156.1512   64   7    25   0.5  7      18_H_10 72_H_0 10_M_10   # Maui
  19.6468   -155.5714   103  7    25   0.5  7      87_L_10 8_H_10 5_M_5     # Big Island
  19.2878   -155.2179   23   7    25   0.5  7      0_L_0                    # Lava

This file can have any number of header lines, as long as they are preceded by a '#' symbol. Anything on a line following this symbol will not be considered by the model. Each data line contains several parameters:
Each lamp characteristic is composed of three fields separated by '_'.
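As an illustration, a minimal parser for one data line of this inventory (toy code, not ILLUMINA's own) could look like this:

```python
# Toy parser for a zone inventory line (illustration only).
# Field order follows the sample header: lat lon R hobs dobs fobs hlamp,
# then any number of weight_spct_lop lamp triplets.
def parse_zone_line(line):
    line = line.split("#", 1)[0].strip()   # drop comments / header lines
    if not line:
        return None
    parts = line.split()
    zone = {
        "lat": float(parts[0]), "lon": float(parts[1]), "radius": float(parts[2]),
        "hobs": float(parts[3]), "dobs": float(parts[4]),
        "fobs": float(parts[5]), "hlamp": float(parts[6]),
        "lamps": [],
    }
    for triplet in parts[7:]:
        weight, spct, lop = triplet.split("_")   # e.g. "90_H_5"
        zone["lamps"].append((float(weight), spct, lop))
    return zone

print(parse_zone_line("21.4474 -157.9712 50 7 25 0.5 7 90_H_5 10_M_10 # Oahu"))
```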
The weighting is applied to the luminous flux of the spectral power distribution of the lamp, in lumens. This means that the spectra are weighted by the photopic sensitivity curve. As can be seen with the last zone of the example, a zone can have a weight of 0; in that case, the pixels associated with it will be discarded as if they were not in a zone.

Example: Suppose one zone is composed of 50% HPS with the angular photometry toto1_fctem.dat, 20% HPS with photometry file toto2_fctem.dat, and 30% LED4000K with photometry file toto3_fctem.dat, and assume that you use the spectral power distributions provided in the Example/Lights directory, HPS_Helios.spct and 4LED_LED-4000K-Philips.spct, to create the light inventory. Then you should write 50_HPS_toto1 20_HPS_toto2 30_4LED_toto3 in column 8 of your inventory file for that zone.

The data referenced by the last two fields must be located in a subfolder named 'Lights'. This folder must contain the following files in addition to the ones used to define the lamp inventory:
These files can be found in the ILLUMINA installation directory (Examples/Lights). Any additional file used to characterize the lamps must follow this format:
The normalization of all these files is not important, as it is done by the programs. (3) In all cases, the characters preceding the first underscore (_) in the lop or spct file name form the reference word that must be written in the inventory file. All .spct files must have the same wavelength scale. All LOP files must share the same angle scale.

Using a discrete light source inventory

The second way to describe the lights is to directly specify their properties on a lamp-by-lamp basis. In this case, the file needs to have the following format:

  # lat       lon         pow hobs dobs fobs hlamp spct lop
  21.295693  -157.856846  250 20   25   0.9  7     MH   5
  21.295776  -157.856782  150 20   25   0.9  7     LED  0
  21.295844  -157.857114  100 30   30   0.85 7     MH   5
  21.286488  -157.845900  100 50   10   0.3  10    LED  1

where
It is possible to use both methods simultaneously, but in that case all the discrete light sources must fall outside of the zones, or inside a zone with a weight of 0.

Defining the experiment

The execution mode

You may be interested in running ILLUMINA for many reasons. By default, ILLUMINA calculates the artificial diffuse radiance, the part of it that is produced by the clouds, the direct radiance reaching the observer along a line of sight to the sources, and the direct radiance coming from a line of sight to reflecting surfaces. If you are mostly interested in the direct radiances, it may be a good idea to maximize the resolution near the observer. The calculation of the direct radiance within the mean free path toward obstacles will not experience any obstacle blocking; blocking by obstacles only occurs when the observer is farther than the mean free path to the ground. This parameter is defined when you specify the "subgrid obstacle properties" with the variable dobs: dobs is actually twice the mean free path. If you are not interested in the sky or cloud radiances, but only in the direct radiance, you can speed up the calculation by turning off the scattering.

Create the input parameters file

The parameters used by the model for executing the experiment are contained in the input_params.in file, as described below:

  # input parameters
  exp_name: Hawaii                 # base name of the experiment (use whatever you want)
  zones_inventory: inventory.txt   # VIIRS-DNB derived inventory
  lamps_inventory:                 # point source inventory
  nb_bins: 5                       # number of spectral bins
  lambda_min: 380
  lambda_max: 830
  reflectance:                     # weighting of basic ASTER reflectance spectra
    asphalt: 0.8
    grass: 0.2
    snow: 0
  aerosol_profile: maritime        # aerosol profile: 'maritime', 'urban' or 'rural'
  relative_humidity: 70            # 0, 50, 70, 80, 90, 95, 98, 99
  estimated_computing_time: 1      # estimated computing time per case in hours
  batch_file_name: batch
  # parameters after here can be lists
  observer_elevation: 10           # elevation above ground level (m)
  air_pressure: 101.3              # lowest domain level atmospheric pressure in kPa
  reflection_radius: 9.99          # radius around light sources where reflections can occur (m)
  cloud_model: 0                   # cloud model selection: 0=clear, 1=Thin Cirrus/Cirrostratus, 2=Thick Cirrus/Cirrostratus, 3=Altostratus/Altocumulus, 4=Stratocumulus, 5=Cumulus/Cumulonimbus
  cloud_base: 0                    # height of the cloud base (m)
  cloud_fraction: 0                # cloud cover fraction (0-100)
  stop_limit: 5000.                # stop computation when the new voxel contribution is less than 1/stop_limit of the cumulated flux (suggested value: 5000.)
  double_scattering: True          # activate double scattering (True/False)
  single_scattering: True          # activate single scattering (True/False)
  elevation_angle: [90,45]
  azimuth_angle: [0,60,120,180,240,300]
  direct_fov: 5                    # field of view for the direct radiance calculations
  aerosol_optical_depth: 0.11      # AOD value at 500 nm
  angstrom_coefficient: 0.7        # Angstrom exponent value

The reflective surface types are ASTER files located in the `Lights` folder.
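For illustration, with the nb_bins, lambda_min and lambda_max values above, and assuming equal-width bins (our reading of the parameters; verify against the model output), the spectral bins would be:

```python
# Spectral binning implied by nb_bins / lambda_min / lambda_max,
# assuming equal-width bins (an assumption, not stated in the guide).
lambda_min, lambda_max, nb_bins = 380, 830, 5
width = (lambda_max - lambda_min) / nb_bins                       # nm per bin
centers = [lambda_min + width * (i + 0.5) for i in range(nb_bins)]
print(width, centers)   # 90.0 [425.0, 515.0, 605.0, 695.0, 785.0]
```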
Run the make_inputs script

Once all the data is obtained and the input parameter file is created, the make_inputs.py script is used to prepare the data for the model.

Quick file check

The script should produce a directory named 'Inputs'. It should contain:
lumlp files are in units of W/nm.

[Figure: example run at 605 nm with contrast boosted.]
Zone 5, which is the lava lakes, is all black because its light is not considered in the model. To achieve this, its lamp inventory was left empty. You can verify that in the 'inventory.txt' file.

Alternative scenarios

You may be interested in simulating alternative scenarios based on the current situation, for example, artificially converting all light sources to a new photometry. This is done with the alt_scenario_maker.py script. Help on that script is available by calling

  alt_scenario_maker.py -h

If used with an alternative zones inventory, a replacement inventory needs to be in the same directory and contain a set of lamp characteristics for each zone. For example:

  1_AMBR_0 # Oahu
  1_AMBR_0 # Molokai + Lanai
  1_AMBR_0 # Maui
  1_AMBR_0 # Big Island
  0_L_0    # Lava

If used with an alternative lamps inventory, a replacement inventory needs to be in the same directory and contain a set of lamp characteristics in the same format as the initial inventory. The script will then generate a folder named Inputs_NAME containing the relevant data.

Submitting the calculations to a Linux cluster

To perform the calculations, we now connect to a cluster. In our case, we connected to 'Mammouth serial II', located at the Université de Sherbrooke; the task scheduler used is Slurm. You may need to manually adjust some files to match the execution environment you are using. It is then necessary to recompile the ILLUMINA model with the following command:

  bash makeILLUMINA

The `Inputs` directory created for each experiment in step 6 should now be transferred to the cluster interactive node via the scp protocol.

Preparing the batch execution

Now we need to run the makeBATCH.py program, which will prepare the execution folder for each calculation directory on the cluster. This must be done from the Inputs folder(s). The documentation of the function is available by calling makeBATCH.py -h.
Note that files with names conflicting with the batch name provided either on the command line or in the 'input_params.in' file will be removed prior to execution. If you want to prepare multiple executions, make sure that they have different batch names. On a Slurm cluster, you may use commands such as squeue, sacct and scancel to keep an eye on and manage the executions.
To execute the calculations, simply execute the bash file(s) produced by the makeBATCH.py script.

Find failed calculations

In many cases you will have a large number of calculations to complete your modeling experiment, with each calculation going to a given core and/or node (if you run on a cluster). For various reasons, there is some chance that some of your calculations will fail, and finding the failed calculations can be a difficult task. For that reason we provide a script called find-failed-runs.py. All you need to do is wait for all calculations to finish, go to the experiment folder, and run the script. The script will show the paths of the folders containing failed runs. If you run it with the -e option, the script will generate the code to launch the failed runs. You will probably want to store it in a file and then run it as a bash script:

  find-failed-runs.py -e > your_final_run.bash

Then simply restart the aborted runs by running this script:

  bash ./your_final_run.bash

Note that the script assumes that you are using a system running Slurm; you will see in the script that each execution begins with sbatch. If you are not using Slurm, just remove «sbatch --time=XX:XX:XX» from the script. In such a case you will also probably need to split the file into several execution scripts to be sure that you will not use too much RAM. You can use the unix split command for that.

Extracting results

ILLUMINA generates two different outputs per calculation:
To extract the data, the extract-output-data.py script is used. It can extract either the value of the diffuse radiance or all available components. It can also extract the contribution maps. Moreover, filters are available to only process certain parameter values. Documentation is available by calling extract-output-data.py -h. The script writes the data to standard output, so it should be redirected to a file:

  extract-output-data.py > results.txt

If you are interested not only in the total diffuse radiance (clouds + preceding atmosphere) but also want to extract the cloud contribution to the radiance as well as the direct and direct reflected radiance, you will need to run the script in full mode, again redirecting the output to a file:

  extract-output-data.py -f > results.txt

There will be a column for each radiometric value. As stated in the documentation, the contribution map can be extracted using the `-c` flag:

  extract-output-data.py -c > results.txt

Units of the radiances are W/sr/m^2/nm. To get the radiance of a spectral bin, one must multiply the radiance delivered by ILLUMINA by the bandwidth (in nm). Units of the irradiances are W/m^2/nm; to get the irradiance of a spectral bin, one must likewise multiply the irradiance delivered by ILLUMINA by the bandwidth (in nm). PCL binary files (XXXXX_pcl.bin) do not have any units: the values represent the fractional contribution of a pixel to the total diffuse radiance, and the sum over all pixels gives 1. PCL files at different resolutions are combined into an HDF5 file to create the total diffuse radiance contribution file in units of W/sr/m^2/nm. These files should be named the following way:

  elevation_angle_XX-azimuth_angle_YY-wavelength_ZZZ.Z.hdf5

Analysing the results

The analysis can be done with your favorite tools. We strongly recommend the use of Python and provide convenience functions in the pytools and hdftools packages provided with ILLUMINA.
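The unit conventions described above can be summarized in a short sketch (toy code, not part of ILLUMINA):

```python
# Toy sketch of the unit handling described in the guide.
def bin_radiance(radiance_per_nm, bin_width_nm):
    """W/sr/m^2/nm -> W/sr/m^2 integrated over one spectral bin."""
    return radiance_per_nm * bin_width_nm

def pcl_is_normalized(fractions, tol=1e-6):
    """PCL values are unitless fractional contributions summing to 1."""
    return abs(sum(fractions) - 1.0) <= tol

# With 5 bins over 380-830 nm, each bin is 90 nm wide:
print(bin_radiance(2.0e-9, 90))       # about 1.8e-7 W/sr/m^2
print(pcl_is_normalized([0.25] * 4))  # True
```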
Transforming to magnitudes per square arcsecond (for astronomers...)

Transforming diffuse radiances to sky brightnesses (SB) in units of mag/arcsec^2 is not a simple task. First of all, you have to consider that ILLUMINA only deals with the artificial component of the SB. If you are using ILLUMINA for a relatively dark site, the artificial SB may represent only a small part of the total SB. To transform radiance to total SB, you will need a relevant estimate of the natural contribution to the total SB. The natural SB is highly variable with time, altitude, season, observing direction, etc. It is composed of many sources, such as the zodiacal light, the starlight, the airglow, the Milky Way, etc. Given that complexity, we suggest determining it experimentally for the modeled site and period you are interested in. To do so, you need an in situ measurement of the total SB from which you can extract the natural component, and then consider this component as a constant natural contribution to the SB for your specific site and period, regardless of the viewing angle, light inventory, obstacle properties, etc. Let's call the radiance responsible for that natural contribution the background radiance ({#R_{bg}#}). Assume that you have an in situ measurement of the total Johnson-Cousins SB. You need to accomplish the following steps to convert your artificial modeled radiance to total SB. Integrate your radiances {#{R_a}#} according to the Johnson-Cousins filter, {#{R_a}#} being the modeled radiance you want to convert to {# SB #}. The sensitivity curves of the Johnson-Cousins filters are provided in the Example/Lights folder (e.g. JC_V.dat). {#R_{bg}#}, the radiance corresponding to the natural level of the sky brightness ({#SB_{bg}#}), can be estimated using the SB data provided by Benn & Ellison 1998, except for the R band, which was taken from La Palma P99 ASTMON (2018-2019) measurements. We simply added 0.03 to the B and V values provided for La Palma (as recommended by Benn & Ellison 1998) and then used the formula below to calculate the radiance.
{## R_{bg} = R_0 \, 10^{-0.4 \, SB_{bg}} ##}

For a value of modeled artificial radiance ({#{R_a}#}), use the following formula to convert to total SB:

{## SB = -2.5 \log_{10} \left( \frac{R_a + R_{bg}}{R_0} \right) ##}

The values of {#R_0#} are derived from the zero points obtained by the Bessell 1979 calibration (DOI 10.1086/130542) and given in the table below.
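The two formulas can be written in code as sketched below. The numeric {#R_0#} zero points come from the Bessell 1979 calibration table and are not reproduced here; the value used in the sanity check is arbitrary.

```python
import math

# The two conversion formulas from the guide, in code form.
# R0: band zero point (Bessell 1979 table); SBbg: natural sky
# brightness in mag/arcsec^2; Ra: modeled artificial radiance.
def background_radiance(R0, SBbg):
    """R_bg = R0 * 10^(-0.4 * SB_bg)"""
    return R0 * 10 ** (-0.4 * SBbg)

def total_sb(Ra, SBbg, R0):
    """SB = -2.5 log10((Ra + R_bg) / R0)"""
    Rbg = background_radiance(R0, SBbg)
    return -2.5 * math.log10((Ra + Rbg) / R0)

# Sanity check: with no artificial light, the total SB equals the
# natural SB (R0 = 1 here is an arbitrary illustrative value).
print(total_sb(0.0, 21.9, 1.0))   # -> 21.9 up to floating-point rounding
```

Adding any artificial radiance makes the sky brighter, i.e. lowers the magnitude value below {#SB_{bg}#}.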
Quick git guide for developers

To publish your locally modified code on the server:

  git commit -u graphycs1 -m "new printout"
  git push origin master

To update your local version of the code:

  git pull