IlluminaGuide2016

ILLUMINA V1 USER'S GUIDE

Martin Aubé, Ph.D. & Alexandre Simoneau, M.Sc.
Copyright 2016. Latest update: 2020-02-11

This version is deprecated. Please refer to IlluminaGuide2021 for the most up-to-date one.

General information

This user's guide aims to help users of the Illumina sky brightness model prepare and manage their own simulations. We hope the document is accurate enough, but we will be happy to improve it according to any difficulties you may encounter while using it. For any help, please contact the PI, Dr. Martin Aubé (martin.aube at cegepsherbrooke.qc.ca). This guide is the most recent one and incorporates recently added features of the model, such as hyperspectral support, the subgrid obstacle filling factor and the contribution of cloud reflection.

Operating system

Illumina should be used on a computer running Linux, with Fortran and C compilers (e.g. gfortran and gcc) and the mercurial and/or git versioning systems installed. A few convenience scripts also require Python (2.7).

Other software dependencies

The following software packages are needed by the system:
Other python libraries might be required.

In all cases, the most recent version of the code should be used. The code is evolving rapidly, so by updating your version frequently you will benefit from new features and bug fixes.

Installing the code

The ILLUMINA model is available from bitbucket: https://bitbucket.org/aubema/illumina All source codes are released under the GNU public licence. To install the model, please follow these steps.

If you are a non-developer user:

cd
mkdir hg
cd hg
git clone https://github.com/aubema/illumina.git

If you do not respect this directory structure exactly, the programs may fail to execute properly. Then you must select the desired version and compile the code:

cd
cd hg/illumina
git checkout v1
bash makeILLUMINA

Then modify the $HOME/.bashrc file by typing the following commands in the terminal window. This will make the programs executable from anywhere on the file system.

echo "PATH=$PATH:$HOME/hg/illumina/:$HOME/hg/illumina/bin" >> $HOME/.bashrc
source $HOME/.bashrc

If you are a graphycs1 developer, you will also want to edit the PYTHONPATH environment variable:

echo "export PYTHONPATH=$PYTHONPATH:$HOME/hg/illumina/" >> $HOME/.bashrc
source $HOME/.bashrc

Downloading and preparing the required satellite images

ILLUMINA requires some satellite data to run properly, namely a digital elevation model, the ground reflectance at multiple wavelengths and the nocturnal light emittance. These data also need to be projected in a suitable spatial reference system and clipped to the simulation domain.

Domain definition

Defining the simulation domain is crucial in the input preparation, as it directly affects everything afterwards. The first step is to define the location(s) where the simulation of the artificial sky brightness is desired. Then the projection needs to be defined, as the model needs to work with coordinates in meters instead of degrees. For this, the EPSG.io website can be used.
Simply search for the country in which the simulation point(s) is/are located and select a projection that covers the region with sufficient accuracy. You then need to find the limits of the desired domain in this reference system; the coordinate transform tool on the above website can be useful in that regard. One must create a parameters file with the ini extension to define the domain. It should be of the following format:

srs: epsg:3750
bbox: 517107 2047513 1017107 2447513
pixsize: 1000

where srs is the spatial reference system in a format supported by the gdal program, bbox is the domain extent, given as the coordinates of the left, bottom, right and top borders respectively, and pixsize is the desired resolution of the pixels in meters.

Alternatively, the defineDomain.py script can be used to generate the domain.ini file. The code asks for the coordinates of the observer and then defines a domain centered around it. It also automatically suggests a valid projection to use.

VIIRS-DNB imagery

The night emittance is obtained from VIIRS-DNB imagery that can be found here. One should download the appropriate tile(s) for the desired period (year and month). We suggest the VCMSLCFG configuration of the monthly composite product because of its stray light correction, but the choice is left to the user. You will want to extract the 'avg_rade9.tif' file, as it contains the actual radiance values, whereas the 'cf_cvg.tif' file contains information related to the quality of the image. The tif file(s) should be placed inside a subfolder named VIIRS-DNB inside your experiment directory.

SRTM data

The digital elevation model is made with the SRTM elevation data that can be found here. One should use the spatial filter to select only the required tiles, and then follow the download procedure. The extracted hgt files should be placed inside a subfolder named SRTM inside your experiment directory.
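Returning to the domain definition, a quick sanity check can be done on the sample domain.ini: the grid dimensions follow directly from bbox and pixsize (a pure-Python sketch; the numbers are copied from the example above).

```python
# Grid size implied by the sample domain.ini shown earlier.
# bbox lists the left, bottom, right and top edges in meters (srs units);
# pixsize is the pixel size in meters.
xmin, ymin, xmax, ymax = 517107, 2047513, 1017107, 2447513
pixsize = 1000

# A non-zero remainder would indicate bbox and pixsize are inconsistent.
assert (xmax - xmin) % pixsize == 0 and (ymax - ymin) % pixsize == 0

nx = (xmax - xmin) // pixsize  # number of columns
ny = (ymax - ymin) // pixsize  # number of rows
```

For the sample values this gives a 500 by 400 pixel grid.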
To extract multiple archives at once, one can use a simple shell loop, e.g. `for f in *.zip; do unzip "$f"; done`.

MODIS reflectance data

The reflectance data is obtained from MODIS imagery (the MYD09A1 Version 6 product). The data can be found here. One should use the spatial and temporal filters to select only the required tiles for the desired period, and then follow the download procedure. It is a good idea to check for the presence of clouds, as they can affect the data. The hdf files should be placed inside a subfolder named MODIS inside your experiment directory. To create the best inventory, you should choose the MODIS date to match the VIIRS-DNB date as closely as possible.

Processing the input images

The Illuminutils.py script should be executed from the experiment directory containing all three data subfolders explained above. Seven files should be produced by this script:
The csv files allow easy transformation between lat/lon coordinates and pixel positions.
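For illustration, the pixel lookup these csv files support can be sketched for points already projected into the domain srs (a hypothetical helper, not part of Illumina; the domain values are those of the domain.ini example, and counting rows from the top edge is an assumption, not something the guide specifies):

```python
# Hypothetical helper: pixel (column, row) containing a point given in
# projected coordinates (meters). Domain values come from the domain.ini
# example; the top-down row convention is an assumption.
XMIN, YMAX, PIXSIZE = 517107, 2447513, 1000

def xy_to_pixel(x, y):
    col = int((x - XMIN) // PIXSIZE)
    row = int((YMAX - y) // PIXSIZE)
    return col, row
```

For example, xy_to_pixel(518607, 2445013) falls one column right of and two rows below the top-left corner of the sample domain.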
Sample files for Hawaii

The standard format is raw binary and the pixel size is constant in meters (1000 m per pixel).

Making a light inventory for each zone

We have to create an inventory file. In Illumina, inventory files allow the user to define different geographical zones, each with a different mix of lamp types (differing by their photometric function or light output pattern (LOP), their spectrum and their lamp height), average distance between obstacles, obstacle height and obstacle filling factor. Two or more zones may be in the same geographical region or partly overlapping. All zones are circle shaped, characterized by a center position and a radius. Each new zone overwrites the previous one where they intersect. Any point not included in a zone will be ignored. To create a zone, edit an ASCII file with a simple text editor such as kwrite or gedit, following the format shown below.

Sample inventory file for the Hawaii territory:

# lat lon R hobs dobs fobs hlamp Zone inventory   Comment
21.4474 -157.9712 50 7 25 0.5 7 90_H_5 10_M_10    # Oahu
21.0052 -157.0123 40 7 25 0.5 7 90_H_5 10_M_10    # Molokai + Lanai
20.7764 -156.1512 64 7 25 0.5 7 18_H_10 72_H_0 10_M_10 # Maui
19.6468 -155.5714 103 7 25 0.5 7 87_L_10 8_H_10 5_M_5  # Big Island
19.2878 -155.2179 23 7 25 0.5 7 0_L_0             # Lava

This file can have any number of header lines as long as they are preceded by a '#' symbol. Anything following this symbol on the same line will not be considered by the model. Each data line contains several parameters:
Each lamp entry is composed of three fields separated by '_'.
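As a minimal illustration of that encoding (a sketch, not part of Illumina itself), a descriptor such as 90_H_5 splits into the lamp percentage, the spectrum reference word and the LOP reference word:

```python
# Sketch: decode one lamp descriptor of the form <percent>_<spct>_<lop>,
# e.g. "90_H_5" means 90% of the lamps use spectrum 'H' and LOP file '5'.
def parse_lamp(descriptor):
    percent, spct, lop = descriptor.split("_")
    return float(percent), spct, lop
```

This works because reference words cannot themselves contain underscores (they are the characters preceding the first underscore in the spct or lop file name).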
The weighting is applied on the luminous flux of the spectral power distribution of the lamp, in lumens. This means that the spectra are weighted by the photopic sensitivity curve.

Example: suppose one zone is composed of 50% HPS with the angular photometry file toto1_fctem.dat, 20% HPS with photometry file toto2_fctem.dat, and 30% LED4000K with photometry file toto3_fctem.dat, and assume that you use the spectral power distributions provided in the Example/Lights directory (HPS_Helios.spct and 4LED_LED-4000K-Philips.spct) to create the light inventory. Then you should write 50_HPS_toto1 20_HPS_toto2 30_4LED_toto3 in column 8 of your inventory file for that zone.

The data referenced by the last two fields must be located in a subfolder named 'Lights'. This folder must contain the following files in addition to the ones used to define the lamp inventory:
These files can be found in the Illumina installation directory (Examples/Lights). Any additional file used to characterize the lamps must follow this format:
The normalization of all these files is not important, as it will be done by the programs. In all cases, any characters preceding the first underscore (_) in the lop or spct file name form the reference word that must be written in the inventory file. All .spct files must share the same wavelength scale. All LOP files must share the same angle scale.

This inventory file can be generated from a list of geolocalized light sources with individual characteristics, written in the following format:

# lat lon pow hobs dobs fobs hlamp spct lop
21.295693 -157.856846 250 20 25 0.9 7 MH 5
21.295776 -157.856782 150 20 25 0.9 7 LED 0
21.295844 -157.857114 100 30 30 0.85 7 MH 5
21.286488 -157.845900 100 50 10 0.3 10 LED 1

where lat and lon are the source coordinates, pow its light output, hobs, dobs and fobs the obstacle height, mean obstacle distance and obstacle filling factor, hlamp the lamp height, and spct and lop the spectrum and LOP reference words.
This file and the domain definition file from step 5.1 can then be used with the `point_source_to_zon.py` script to generate an inventory file to which uniform zones can be added. Keep in mind that the zone defined last has priority.

Create the reflectance file list

We are using the MODIS reflectance product in 4 bands. The reflectance at a given wavelength will be interpolated or extrapolated from the MODIS data. One needs to create a file associating the modis files with the correct wavelengths, where the first line gives the number of reflectance images. The names are the file names excluding the path; the path will be needed later. The wavelengths are in nanometers. This file is usually named modis.dat, but any other name can be used.

4
469 refl_b03.bin
555 refl_b04.bin
645 refl_b01.bin
858 refl_b02.bin

Run the make_inputs script

Run the script make_inputs.py with the files created at steps 5.5, 6.1 and 6.2.
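The interpolation behaviour described for the reflectance file list can be sketched with numpy. The reflectance values below are illustrative only, and note that np.interp clamps outside the 469-858 nm range, whereas the model may extrapolate there:

```python
import numpy as np

# Wavelengths (nm) from the modis.dat example; reflectances are made-up
# values for a single pixel, one per MODIS band.
wavelengths = np.array([469.0, 555.0, 645.0, 858.0])
reflectances = np.array([0.05, 0.08, 0.10, 0.30])

# Linear interpolation to an intermediate working wavelength, e.g. 605 nm.
r_605 = np.interp(605.0, wavelengths, reflectances)
```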
Example run

Here is the input needed to execute the make_inputs.py script for the 'Example' directory present in the Illumina installation folder:
Quick file check

The script should produce two directories, 'Lamps' and 'Intrants', and two files, 'exp_name.zon' and 'integration_limits.dat', where exp_name is the experiment name, a parameter given at step 7 in the 'make_inputs.py' script.

The 'Lamps' directory should contain three files for each zone defined in the inventory file: the angular and spectral photometry of each zone in binary (.bin), ASCII (.dat) and plotted (.png) form.

The 'Intrants' directory should contain N*X 'fctem_wl_XXX_zon_NNN.dat' files, where N is the number of zones and X the number of wavelengths used. In the case of the example, both these numbers are 5, so you should have 25 'fctem' files. You should also have 'lumlp' files, one for each wavelength and zone combination plus one for each wavelength, giving the global view. There must also be 4 files named 'exp_name_altlp.bin', 'exp_name_obstd.bin', 'exp_name_obsth.bin' and 'exp_name_obstf.bin'. You should also find one 'modis_XXX.bin' file per wavelength and two '.lst' files. In addition, you should see some symbolic links: one for 'integration_limits.dat', one for 'srtm.bin' and one '.mie.out' for each wavelength. For the given example, that represents a total of 72 files.

You can open all the _XXX_lumlp_NNN.bin files with imagemagick to confirm that they are realistic for the specific experiment. lumlp files give the total lamp spectral flux for each pixel of a zone at the given wavelength.

Example run at 605 nm with contrast boosted
Zone 5, which is the lava lakes, is all black because its light isn't considered in the model. To achieve this, its lamp inventory was left empty. You can verify that in the 'inventory.txt' file.

Preparation of the batch file

This file is a bash program that will generate many running experiments according to the number of desired cases for each variable. We must define the different variables that will be used by the makeBATCH-multispectral.bash program. This is done by editing a file named makeBATCH.in (note that you can use another name). One file can be created for each experiment, if required. A sample makeBATCH.in file is provided below:

# Illumina's makeBATCH-multispectral.bash input file
batch_file_name TortureMammouth # base name of the scripts that will be submitted to the queue
pixel_size 1000 # pixel size in meters
experiment_name Hawaii # modeling experiment name
pressure 101.3 # lowest domain level atmospheric pressure in kPa
estimated_computing_time 120 # estimated computing time per case in hours
terrain_elevation_file srtm.bin # bin file containing the elevation model
relative_humidity 70 # atmospheric relative humidity
aerosol_model maritime # aerosol model (rural, urban, maritime)
cloud_model 0 # cloud model selection: 0=clear, 1=Thin Cirrus/Cirrostratus, 2=Thick Cirrus/Cirrostratus, 3=Altostratus/Altocumulus, 4=Cumulus/Cumulonimbus, 5=Stratocumulus
nearest_source_distance 150 # minimal distance allowed to the nearest light source (m)
1_radius 27 # side length of the square within which the simulation resolution will be one pixel (full resolution) - must be an odd multiple of 9 (e.g. 9, 27, 45, 63, 81, ...)
3_radius 81 # side length of the square within which the simulation resolution will be three pixels wide - must be an odd multiple of 9 larger than the full resolution radius; outside this square the resolution will be nine pixels wide
stop_limit 15000. # stop computation when the new voxel contribution is less than 1/stop_limit of the cumulated flux (suggested value: 15000.); for purely theoretical work (e.g. only one pixel lit) increase this to at least 1E10
x_positions 269 # list of x positions of the observer (pixel)
y_positions 245 # list of y positions of the observer (pixel); linked to x_positions
z_positions 0. # list of z positions of the observer above ground (m)
scattering_skip 71 # list of 2nd scattering acceleration factors (must be a prime number)
scattering_radius 4000 # list of maximum 2nd scattering radii (m)
elevation_angles 90 45 # list of elevation viewing angles (deg)
azimuth_angles 0 10 # list of azimuth viewing angles (deg)
aerosol_optical_depth 0.11 # list of AOD values at 500 nm
angstrom_coefficients 0.7 # list of Angstrom exponent values; linked to aerosol_optical_depth

If you used Illumina prior to Dec 4 2017, note that z_positions is now defined as the elevation above local ground level; it was previously the voxel number along the z axis.

Submitting the calculations to a Linux cluster

To perform the calculations, we now connect to a cluster. In our case, we connected to 'Mammouth serial II', located at Université de Sherbrooke. It is then necessary to recompile the ILLUMINA model using 'makeILLUMINA' or through the following command:

bash makeILLUMINA

The `Intrants` directory created for each experiment in step 6 should now be transferred to the cluster interactive node via the scp protocol.

Preparing the batch execution

Now we need to run the makeBATCH-multispectral.bash program, which will prepare the multiple calculations for each experiment directory on the cluster:

nohup bash makeBATCH-multispectral 'path_to_makeBATCH.in/filename_makeBATCH.in' &
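For reference, the stop_limit rule described in the parameter file above behaves roughly like this sketch (illustrative only, not the model's actual code):

```python
# Sketch of the stop_limit criterion: stop accumulating once a new voxel
# contribution is smaller than 1/stop_limit of the flux accumulated so far.
def accumulate(contributions, stop_limit=15000.0):
    total = 0.0
    for c in contributions:
        if total > 0.0 and c < total / stop_limit:
            break  # negligible contribution: stop the computation
        total += c
    return total
```

This is why purely theoretical runs with a single lit pixel need a much larger stop_limit: a tiny cumulated flux makes the cutoff trigger too early.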
To delete all your queued jobs, you can use the following command:

list=`qstat @ms -u $(whoami)`; for i in $list; do if [ `echo $i | grep ms` ]; then echo $i; qdel $i; sleep 0.01; fi; done

To execute the calculations, run:

bash 'output_batch_file'

This step should be repeated for each season or period and each relative humidity value.

Extracting results

ILLUMINA generates two image files per calculation: one showing the contribution of each pixel to the calculated sky radiance (PCL) and the other illustrating the sensitivity to light pollution (PCW). It also produces the sky radiance value in the specified viewing direction. To extract the data, you have to create a list of files to extract. This is done by concatenating the 'output_batch_file' files produced in step 9. To do so, go to the $HOME directory and type:

cat 'output_batch_file'* > exp_name.list

using the same string as in step 9. To extract the data, simply execute:

extract-output-data.bash exp_name exp_name.list

At the end of the execution of extract-output-data.bash, a Results directory will have been created, containing all PCL files (e.g. PCL-x302y58-2005-ta0.05-wl436-el15-az0.bin), all PCW files (e.g. PCW-x302y58-2005-ta0.05-wl436-el15-az0.bin) and .out files (the log of each execution).

The script 'extract_more.py' can be useful to get important information out of all the files produced by the previous script. Simply call it from where you want the results to be produced. It will ask you a few parameters:
Alternative scenarios

Illumina can simulate alternative lighting scenarios using the same light intensities normalized in photopic vision. To do this, simply build your alternative lamp inventory in a manner similar to the current inventory. Alternative inventory files must follow the format shown below:

# Zone inventory
1_4LED_0 # Oahu
1_4LED_0 # Molokai + Lanai
1_4LED_0 # Maui
1_4LED_0 # Big Island
0_4LED_0 # Lava

where each line holds a lamp inventory similar to the original inventory file, but without the zone characteristics. Once you have created that file, you can execute the 'alt_scenario_maker.py' script. Here is the needed input:
Extracting results

If you simulated multiple scenarios, you must manually organize the results folders into a single Results directory, where the name of each subfolder ends with an underscore followed by a string identifying the scenario. For instance, you could have something like `Hawaii_current` and `Hawaii_LED`. You can then use the python script extract_more.py. This script will generate data files for each set of experiment parameters containing the light pollution spectrum for each scenario. It will also produce a few graphs comparing the scenarios, and a data cube in the FITS format for each experiment parameter group and scenario. These can then be opened with your favorite FITS program, for instance SAOImage DS9.

Quick mercurial usage guide for developers

To publish your locally modified codes on the server:

hg commit -u graphycs1 -m "new printout"
hg push

To update your local version of the codes:

hg pull
hg update