lbc_lnk is called for all diagnostic variables in LOBSTER, e.g. in trcbio.F90:
CALL lbc_lnk( zw2d(:,:,jl),'T', 1. )
Q: do I need to do this in MY_TRC?
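My working assumption is yes: any diagnostic array whose halo points are not computed locally needs the same halo exchange before output. A minimal sketch following the LOBSTER pattern, assuming the usual MY_TRC module context; zw2d is an illustrative work array (not actual MY_TRC code) and jp_myt0 the index of the first MY_TRC tracer:
DO jj = 2, jpjm1                              ! fill interior points of a 2-D diagnostic
   DO ji = 2, jpim1
      zw2d(ji,jj) = trn(ji,jj,1,jp_myt0)      ! e.g. surface value of the first MY_TRC tracer
   END DO
END DO
CALL lbc_lnk( zw2d(:,:), 'T', 1. )            ! exchange halos before the diagnostic is written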
"tra" seems to represent the tracer concentration in the following time step, therefore I need to update tra instead of trn! The words "update the trend" are confusing to me (it seems the time rate of change is represented by tra. rather it should say update tra with the trend (or flux)).
tra, trn, trb for the next time step will be updated by TRP/trcnxt.F90, as well as the OBC and LBC are computed.
Actually, "tra" does represent the time rate of change in tracer concentration. For example PISCES/p4zflx.F90:
! compute the trend
tra(ji,jj,1,jpdic) = tra(ji,jj,1,jpdic) + ( zfld - zflu ) / fse3t(ji,jj,1)
where zfld - zflu represents the flux [mol/m2/s], which is divided by fse3t(ji,jj,1), the vertical scale factor [m], i.e. the thickness of the first model layer. The division thus gives the trend in [mol/m3/s].
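The analogous update for one of my MY_TRC tracers would then look like this (a sketch only: zflux is a hypothetical surface flux array in [mol/m2/s] and jp_myt0 the tracer index; neither is taken from actual MY_TRC code):
DO jj = 1, jpj
   DO ji = 1, jpi
      ! add the surface flux, converted to [mol/m3/s], to the trend
      tra(ji,jj,1,jp_myt0) = tra(ji,jj,1,jp_myt0) + zflux(ji,jj) / fse3t(ji,jj,1)
   END DO
END DO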
Friday, September 30, 2016
NaN in tracer.stat
I finally found the reason for the segmentation fault (I think). It is because at least one of the newly added tracers (icechl, iceno3, and/or icenh4 in this case) has a value of NaN, as seen in tracer.stat (which reports the sum of all 3-D tracers; see ztrai in trcstp.F90 and the sketch at the end of this post):
1 0.7883947556E+03
2 NaN
This happens at the second time step, which means that DMSPd and/or DMS becomes NaN at the second time step...
Actually, this was not the cause of the segmentation fault, as the run passed the second time step when key_my_trc_iceeco was excluded:
1 0.7883947556E+03
2 NaN
3 NaN
4 NaN
5 NaN
Anyway, it is good to know that my 3-D tracers are erroneous as well :(
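For reference, the ztrai calculation in trcstp.F90 is essentially the following (quoted from memory, so treat it as a sketch): a volume-weighted global sum over each tracer field, which is why a single NaN anywhere in a field poisons the whole line in tracer.stat.
ztrai = 0.e0
DO jn = 1, jptra
   ztrai = ztrai + glob_sum( trn(:,:,:,jn) * cvol(:,:,:) )   ! cvol = t-cell volume
END DO
! ztrai / areatot is then written to tracer.stat at every time step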
Thursday, September 29, 2016
time step for tracers in MY_TRC
PISCES/trcini_pisces.F90
rfact = rdttrc(1)
HH this should equal rdt (the time step for dynamics) unless a different tracer time step is given.
rfact2 = rfact / float (nrdttrc)
HH this would still equal rfact unless nrdttrc (the number of biology time steps per tracer time step) is greater than 1.
xstep = rfact2 / rday
HH rday = 86400 s/day (defined in OPA_SRC/DOM/phycst.F90), so xstep gives the time step in days. PISCES uses xstep when a rate constant is given in units of d-1. I will avoid using xstep and instead convert the rate constants to s-1 during initialization, as in FABM.
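For example, with rdt = 1080 s and nrdttrc = 1 (the rdt value is just an illustration), xstep = 1080/86400 = 0.0125 d. The conversion I have in mind is a one-off at initialization, something like this sketch, where zrate_d is a hypothetical rate constant from the namelist in d-1:
! convert the rate constant from d-1 to s-1 once, during initialization
zrate_s = zrate_d / rday        ! [d-1] / [s d-1] = [s-1], with rday = 86400 s/day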
adding a new variable in netCDF
# Re: create new OBC files that include DMS and DMSPd in the water column.
# To run the DMS model, I need to set an open boundary condition for dms and dmspd. For now, I will just create an OBC with a minimum value (0.01) everywhere in time and space, as is done for some of the PISCES variables (e.g. ZOO). In the current setup, the same OBC files (obc_[east,west]_trc_cmoc_y0000m00_canesm.nc) are used for interannual runs, so I will create two new files (east and west, although they may be identical) that include both dms and dmspd.
# Copy the source OBC files into the tmp folder.
cp obc_east_trc_cmoc_y0000m00_canesm.nc ~/tmp/
cp obc_west_trc_cmoc_y0000m00_canesm.nc ~/tmp/
# Extract the OBC data for ZOO and ZOO2 (which are constant values of 0.01 everywhere) and save them to a new file.
ncks -v ZOO,ZOO2 obc_east_trc_cmoc_y0000m00_canesm.nc tmp_east.nc
ncks -v ZOO,ZOO2 obc_west_trc_cmoc_y0000m00_canesm.nc tmp_west.nc
# Rename the variables to dms and dmspd.
ncrename -O -v ZOO,dms tmp_east.nc
ncrename -O -v ZOO,dms tmp_west.nc
ncrename -O -v ZOO2,dmspd tmp_east.nc
ncrename -O -v ZOO2,dmspd tmp_west.nc
# Add the new variables to the existing files.
ncks -A -v dms,dmspd tmp_east.nc obc_east_trc_cmoc_y0000m00_canesm.nc
ncks -A -v dms,dmspd tmp_west.nc obc_west_trc_cmoc_y0000m00_canesm.nc
# Rename the files so that I don't confuse them with the source files.
mv obc_east_trc_cmoc_y0000m00_canesm.nc hh_obc_east_trc_cmoc_y0000m00_canesm.nc
mv obc_west_trc_cmoc_y0000m00_canesm.nc hh_obc_west_trc_cmoc_y0000m00_canesm.nc
# Remove the tmp files.
rm tmp_east.nc tmp_west.nc
Wednesday, September 28, 2016
20160928
When key_my_trc_iceeco is activated, the run stops here (ocean.output):
trcdit_wr : write NetCDF passive tracer concentrations at 3 time-step
~~~~~~~~~
But when I submit the job with "#PBS -l nodes=1:ppn=12" instead of procs=12, the run went one step further:
trcdit_wr : write NetCDF passive tracer concentrations at 3 time-step
~~~~~~~~~
trcdii_wr : write NetCDF additional arrays at 3 time-step
~~~~~~
Actually, I repeated the run with "#PBS -l nodes=1:ppn=12", but this time it did not reproduce the earlier result; instead it gave the same result as the 'procs=12' case... very strange.
But the run is still being killed with error 174 (segmentation fault) and error 78 (not sure exactly what these mean):
forrtl: severe (174): SIGSEGV, segmentation fault occurred
forrtl: error (78): process killed (SIGTERM)
It seems that this happens at the third time step (I tested with "write" statements).
Maybe I need to ask for more procs? Try with procs=24:
The run went a bit further.
Now increase vmem=3000mb:
No further progress... so I don't think it is a lack of memory or processors.
I will look into MY_TRC again...
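Before that, one way to localize the problem (a sketch, untested: the x /= x comparison is true only for NaN, and jp_myt0/jp_myt1 are the first/last MY_TRC tracer indices):
! report any NaN found in the MY_TRC tracer fields at time step kt
DO jn = jp_myt0, jp_myt1
   DO jk = 1, jpk
      DO jj = 1, jpj
         DO ji = 1, jpi
            IF( trn(ji,jj,jk,jn) /= trn(ji,jj,jk,jn) ) THEN
               WRITE(numout,*) 'NaN in trn: kt=', kt, ' ji,jj,jk,jn=', ji, jj, jk, jn
            ENDIF
         END DO
      END DO
   END DO
END DO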
Wednesday, September 21, 2016
interpolating sea ice concentration onto NAA grid
1) Download the ice concentration data from http://nsidc.org/data/G10010
2) I am only interested in satellite data from 1979 to 2013, so extract data only for that time period:
ncea -F -d time,1549,1968 G10010_SIBT1850_v1.1.nc G10010_SIBT1850_v1.1_HH1979_2013.nc
3) Convert the data into NAA grid using SOSIE:
../bin/sosie.x -f namelist.ice2naa
4) Compute the monthly climatology (ref: http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-7.html). The hyperslab -d time_counter,$k,420,12 picks every 12th record starting from month $k, so each pass averages all records for one calendar month:
for k in $(seq 1 12); do ncra -F -d time_counter,$k,420,12 icecon_G10010_SIBT1850_v1.1-naa_icecon.nc4 $k.nc; done
5) Concatenate the climatology for each month into a single file (ref: http://research.jisao.washington.edu/data_sets/nco/#example8):
ncrcat -h [0123456789].nc 1[012].nc icecon_mon_clim.nc
Saturday, September 10, 2016
Updated Python script for Lana DMS climatology CSV to netCDF conversion
The script has been updated to:
- include a time variable, to facilitate the SOSIE interpolation;
- set the dimension order to (time, latitude, longitude), which ferret recognizes correctly (translated in ferret as L, J, I);
- loop over the months so that all 12 are saved in one file.
NOTE: read_csv has a header flag, which should be None on my Mac but 0 on my office computer in order to read the correct number of rows (=180). Not sure why they behave differently.
Reference used to develop the script: http://qiita.com/okadate/items/954574a95545b06ca257
csv2nc.py
import netCDF4
from pylab import *
from pandas import read_csv

ncout = netCDF4.Dataset('test.nc', 'w')
fmonth = ['JAN','FEB','MAR','APR','MAY','JUN','JUL','AUG','SEP','OCT','NOV','DEC']
for x in range(12):
    # read one monthly CSV of the Lana climatology
    data = array(read_csv('../lana_dms_src/DMSclim_' + fmonth[x] + '.csv', header=0))
    if x == 0:
        # on the first pass, set up dimensions, coordinates and variables
        dim0 = size(data, 0)
        dim1 = size(data, 1)
        lats_out = 89.5 - 1.0*arange(dim0, dtype='float32')
        lons_out = -179.5 + 1.0*arange(dim1, dtype='float32')
        ncout.createDimension('latitude', dim0)
        ncout.createDimension('longitude', dim1)
        ncout.createDimension('time', None)
        lats = ncout.createVariable('latitude', dtype('float32').char, ('latitude',))
        lons = ncout.createVariable('longitude', dtype('float32').char, ('longitude',))
        lats.units = 'degrees_north'
        lons.units = 'degrees_east'
        lats[:] = lats_out
        lons[:] = lons_out
        time = ncout.createVariable('time', dtype('int32').char, ('time',))
        time.long_name = 'Climatological Month'
        time.units = 'days since 1968-05-23 00:00:00'
        time[:] = 1 + arange(len(fmonth), dtype='float32')
        dms = ncout.createVariable('DMS', dtype('float32').char, ('time','latitude','longitude'))
        dms.units = 'nmol L-1'
        data_all = empty((len(fmonth), dim0, dim1))
    data_all[x,:,:] = data
dms[:,:,:] = data_all
ncout.close()
Thursday, September 8, 2016
How to convert the original Lana DMS climatology (in CSV format) into netCDF format
import netCDF4
from pylab import *
from pandas import read_csv
ncout = netCDF4.Dataset('test.nc','w')
data = array(read_csv('../dmsclimatology/DMSclim_APR.csv', header=None))
dim0 = size(data,0)
dim1 = size(data,1)
lats_out = 89.5 - 1.0*arange(dim0,dtype='float32')
lons_out = -179.5 + 1.0*arange(dim1,dtype='float32')
ncout.createDimension('latitude',dim0)
ncout.createDimension('longitude',dim1)
lats = ncout.createVariable('latitude',dtype('float32').char,('latitude',))
lons = ncout.createVariable('longitude',dtype('float32').char,('longitude',))
lats.units = 'degrees_north'
lons.units = 'degrees_east'
lats[:] = lats_out
lons[:] = lons_out
dms = ncout.createVariable('DMS',dtype('float32').char,('latitude','longitude'))
dms.units = 'nmol L-1'
dms[:] = data
ncout.close()
If you want to use this script, you will need to make sure you have those three modules installed and change the file name passed to read_csv to your own.
The output looks like this (on the right) and you can confirm that this is consistent with the published figure (on the left). NOTE: the color bar scales are different, so they don't look exactly the same.