In LOBSTER, lbc_lnk is called for the prognostic variables in the uppermost layer only:
LOBSTER/trcbio.F90: CALL lbc_lnk( zw2d(:,:,jl),'T', 1. )
LOBSTER/trcbio.F90: CALL lbc_lnk( zw3d(:,:,:,jl),'T', 1. )
LOBSTER/trcbio.F90: CALL lbc_lnk( trbio(:,:,1,jl),'T', 1. )
LOBSTER/trcexp.F90: CALL lbc_lnk( sedpocn, 'T', 1. )
LOBSTER/trcini_lobster.F90: CALL lbc_lnk( cmask, 'T', 1. )
In PISCES, by contrast, it is called over the entire water column:
PISCES/p4zsed.F90: CALL lbc_lnk( zcmask , 'T', 1. ) ! lateral boundary conditions on cmask (sign unchanged)
PISCES/p4zsed.F90.v1: CALL lbc_lnk( zcmask , 'T', 1. ) ! lateral boundary conditions on cmask (sign unchanged)
PISCES/p4zsed.F90.v2: CALL lbc_lnk( zcmask , 'T', 1. ) ! lateral boundary conditions on cmask (sign unchanged)
PISCES/trcsms_pisces.F90: CALL lbc_lnk( trn(:,:,:,jn), 'T', 1. )
PISCES/trcsms_pisces.F90: CALL lbc_lnk( trb(:,:,:,jn), 'T', 1. )
PISCES/trcsms_pisces.F90: CALL lbc_lnk( tra(:,:,:,jn), 'T', 1. )
PISCES/trcsms_pisces.F90: CALL lbc_lnk( trn(:,:,:,jn), 'T', 1. )
CFC and C14b, meanwhile, do not use the lbc_lnk subroutine at all.
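For reference, a listing like the one above can be reproduced with a recursive grep over the TOP source directories (my command, assuming these directory names):
grep -r "CALL lbc_lnk" LOBSTER/ PISCES/ CFC/ C14b/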
Thursday, October 20, 2016
Saturday, October 15, 2016
sendmail
Within a qsub job, the 'mail' and 'mailx' commands are not available.
Alternatively, I can use 'sendmail' for the same task. This is especially useful on Nestor because, as of now, Nestor has no mail notification option in PBS.
Examples of sendmail:
echo -e "subject \n body message" | sendmail email@email.com
Here, the \n separates the subject and the body message (http://unix.stackexchange.com/a/168233).
The default sender is the hostname of the node you are running on, e.g. hermes0199.westgrid.uvic.ca.
Unfortunately, some email servers (like my old university's) blocked mail from this address for some reason.
The solution is to specify the sender (as some existing email address) with the -f flag:
echo -e "subject \n body message" | sendmail -f "sender@sender.com" email@email.com
You can also specify the sender's display name with the -F option:
echo -e "subject \n body message" | sendmail -F "Me from Nestor" -f "sender@sender.com" email@email.com
Thursday, October 13, 2016
slow because of jasper or MY_TRC?
qstat -u
2000mb 35:59:59 R 30:13:58
tail tracer.stat
9529 0.9203073862E+03
9530 0.9203655932E+03
9531 0.9204196679E+03
9532 0.9204798224E+03
9533 0.9205345088E+03
9534 0.9205939773E+03
9535 0.9206496257E+03
9536 0.9207078335E+03
9537 0.9207647118E+03
9538 0.9208231441E+03
The final time step is 26280, so I killed the process.
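(A rough rate check, using my arithmetic on the qstat numbers above: 9538 steps in about 30.2 hours is roughly 316 steps per hour, so all 26280 steps would need about 83 hours, i.e. more than twice the 36-hour walltime.)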
This slowness may be due to MY_TRC; I will check.
model stops running after tra_nxt
On Nestor, the biogeochemical model with MY_TRC stops after this step in ocean.output:
tra_nxt : achieve the time stepping by Asselin filter and array swap
~~~~~~~
tracer.stat only has the output for the first time step.
1 0.7887250137E+03
This problem still occurs after recompiling the model without MY_TRC, so it is not a MY_TRC problem.
I copied the TOP_SRC source code that works on Jasper over to Nestor and tested it (b4_tra_nxt_ref_cmoc06.7/EXP01), but the problem persists. Thus, it might be specific to Nestor.
I increased procs=12 to procs=24, but the model still hangs at "tra_nxt".
UPDATE on Oct.16
This issue is fixed. Although I cannot be 100% certain (because of my forgetfulness), the problem might have been due to compiler options not being set correctly, in particular in arch-nestor_hh.fcm:
%FC mpif90 -r8 -i4 -fp-model source
Some of these flags were missing (they were marked in bold in the original post, but the emphasis is lost in this plain-text copy). Since -r8 promotes default reals to 8 bytes and -fp-model source enforces value-safe floating-point semantics, missing them could plausibly change the model's behavior. I should double-check this.
Tuesday, October 11, 2016
makenemo fails in nestor
I can compile (./makenemo) on Jasper but not on Nestor. On Nestor, I get the following error:
./makenemo -m ifort_hpcnestor_pisces_x -n physics_only_naa
mpif90 -o nemo.exe /home/hhayashi/nemo_eric/CONFIG/physics_only_naa/BLD/obj/nemo.o -L/home/hhayashi/nemo_eric/CONFIG/physics_only_naa/BLD/lib -l__fcm__nemo -shared-intel -mcmodel=medium -L/global/software/netcdf-4.1.3/lib -lnetcdf -lnetcdff
ld: cannot find -ltorque
fcm_internal load failed (256)
gmake: *** [nemo.exe] Error 1
gmake -f /home/hhayashi/nemo_eric/CONFIG/physics_only_naa/BLD/Makefile -j 1 all failed (2) at /g01/home/hhayashi/nemo_eric/EXTERNAL/fcm/bin/../lib/Fcm/Build.pm line 597
The torque library does exist:
ldconfig -p | grep libtorque
libtorque.so.2 (libc6,x86-64) => /opt/torque-2.5.13/lib/libtorque.so.2
libtorque.so (libc6,x86-64) => /opt/torque-2.5.13/lib/libtorque.so
After some googling, I found the solution (ref: http://stackoverflow.com/a/16710346). I had to explicitly add the torque library path to the ARCH file (in %NCDF_LIB):
-L/opt/torque-2.5.13/lib -ltorque
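For reference, the resulting %NCDF_LIB entry in the ARCH file should look something like this (my reconstruction from the link command above, not a verbatim copy of the file):
%NCDF_LIB            -L/global/software/netcdf-4.1.3/lib -lnetcdf -lnetcdff -L/opt/torque-2.5.13/lib -ltorque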
Monday, October 10, 2016
blow up issue solved!
tracer.stat indicates an undesirable increase in the concentrations of some state variable(s), blowing up by the 7th time step:
1 0.7887250195E+03
2 0.7887190683E+03
3 0.8475861522E+03
4 0.9302648776E+03
5 0.1044380477E+04
6 0.1170169324E+04
7 0.2228768128E+11
8 0.2853781480E+11
9 0.5271543990E+11
10 0.7216373356E+11
After commenting and uncommenting source/sink terms in the equations, I discovered that this was due to fmort1 and fmort2 not being properly flushed into the water column:
tra(ji,jj,1,jpgoc) = tra(ji,jj,1,jpgoc)                              &
   ! source 1: flushing
   + (1.-f_p2)*flushdia*(1.-frld(ji,jj))*z_ia/e3t(ji,jj,1)           &
   ! source 2: linear mortality (commented out; the lone +0. keeps the continuation chain valid)
   + 0.
   ! + 0.7*fmort1 &
   ! source 3: quadratic mortality
   ! + fmort2
After commenting out the fmort1 and fmort2 terms above, tracer.stat looks more reasonable:
1 0.7887289636E+03
2 0.7887230246E+03
3 0.7890159536E+03
4 0.7902228379E+03
5 0.7905440777E+03
6 0.7908932570E+03
7 0.7910913500E+03
8 0.7913530813E+03
9 0.7915290453E+03
10 0.7917645388E+03
Thursday, October 6, 2016
generating meteo file for amundsen
for month in `seq 1 12`;do wget --content-disposition "http://climate.weather.gc.ca/climate_data/bulk_data_e.html?format=csv&stationID=1633&Year=2008&Month=${month}&Day=14&timeframe=1&submit=Download+Data" ;done
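The same request generalizes to other stations by adding an outer loop over station IDs (the IDs below are placeholders):
for station in STATION_ID_1 STATION_ID_2; do for month in `seq 1 12`; do wget --content-disposition "http://climate.weather.gc.ca/climate_data/bulk_data_e.html?format=csv&stationID=${station}&Year=2008&Month=${month}&Day=14&timeframe=1&submit=Download+Data"; done; done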
Other stations in the Amundsen Gulf area:
Wednesday, October 5, 2016
When all three my_trc components are added, the totals in tracer.stat increase too rapidly:
tail tracer.stat
1 0.7895863787E+03
2 0.3124101036E+04
3 0.1067765821E+13
4 0.2180091013E+27
5 0.8173235638E+41
This still occurs even if I remove the icedms component:
1 0.7895863787E+03
2 0.3056491046E+04
3 0.1034367164E+13
4 0.2180090884E+27
5 0.8173235638E+41
When I removed the chunk of code handling ice ablation/accretion, tracer.stat became normal:
1 0.7895862638E+03
2 0.7895870356E+03
3 0.7913792876E+03
4 0.7936409329E+03
5 0.7949360900E+03
Thus, the problem is here:
! Lateral accretion/ablation
IF( frld(ji,jj)-pfrld(ji,jj) > 0. ) THEN   ! In case of accretion, newly formed ice will have a tracer concentration of an underlying seawater value.
   icechl(ji,jj) = (1.-pfrld(ji,jj)) / (1.-frld(ji,jj)+epsilon10) * icechl(ji,jj)                               &
      &          + ( (1.-frld(ji,jj)) - (1.-pfrld(ji,jj)) ) / (1.-frld(ji,jj)+epsilon10) * trn(ji,jj,1,jpdch)
   iceno3(ji,jj) = (1.-pfrld(ji,jj)) / (1.-frld(ji,jj)+epsilon10) * iceno3(ji,jj)                               &
      &          + ( (1.-frld(ji,jj)) - (1.-pfrld(ji,jj)) ) / (1.-frld(ji,jj)+epsilon10) * trn(ji,jj,1,jpno3)
   icenh4(ji,jj) = (1.-pfrld(ji,jj)) / (1.-frld(ji,jj)+epsilon10) * icenh4(ji,jj)                               &
      &          + ( (1.-frld(ji,jj)) - (1.-pfrld(ji,jj)) ) / (1.-frld(ji,jj)+epsilon10) * trn(ji,jj,1,jpnh4)
END IF
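My guess at the mechanism (not verified in this post): in nearly ice-free cells, frld approaches 1, so the denominator 1.-frld(ji,jj)+epsilon10 collapses to roughly epsilon10, and the factor (1.-pfrld(ji,jj))/(1.-frld(ji,jj)+epsilon10) can amplify the ice tracer by up to ~1/epsilon10 in a single step. A minimal guard, as a sketch (my addition, not the fix actually applied here), would skip the update when the remaining ice fraction is negligible:
! sketch: only apply the accretion/ablation update where some ice actually remains
IF( frld(ji,jj)-pfrld(ji,jj) > 0. .AND. (1.-frld(ji,jj)) > epsilon10 ) THEN
   ! ... same icechl/iceno3/icenh4 update as above ...
END IF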
Tuesday, October 4, 2016
interesting segfault
I had a segmentation fault, which turned out to be due to NaN. This was interesting: although the IF statement was always false, I still got the segfault, because the division in the diagnostic write just before the IF produced NaN regardless.
write(numout,*) 'hhpass: nancheck ',(1.-pfrld(ji,jj))/(1.-frld(ji,jj)+epsilon30)
IF(frld(ji,jj)-pfrld(ji,jj) > 0.) THEN ! In case of accretion, newly formed ice will have a tracer concentration of an underlying seawater value.
write(numout,*) 'hhpass: inside if statement'
icechl(ji,jj) = icechl(ji,jj)/(trn(ji,jj,1,jpdch)+epsilon30)!((1.-pfrld(ji,jj))/(1.-frld(ji,jj)+epsilon30)*icechl(ji,jj) + ((1.-frld(ji,jj))-(1.-pfrld(ji,jj)))/(1.-frld(ji,jj)+epsilon30)*trn(ji,jj,1,jpdch)
END IF
Update: this seems to be a floating-point precision issue.
Writing out the variables in the denominator and numerator shows a particular precision (14 significant digits):
hhpass: pfrld 1.00000000000000
hhpass: frld 1.00000000000000
In the example above, I tried to avoid the division by zero by introducing an epsilon of 1.e-30 ("epsilon30"), but the division still produced NaN:
hhpass: nancheck NaN
After this, I instead introduced an epsilon with fewer decimal places than the precision carried by the two variables (1.e-4). Then the division produced zero:
hhpass: nancheck 0.000000000000000E+000
Epsilon30 worked for the biogeochemical tracers, but not for the ice physics tracers (probably a specific floating-point precision is defined in LIM_SRC_2 for these tracers?).
Monday, October 3, 2016
NaN in tracer.stat --> found the problem!
I used to see NaN at the second time step in tracer.stat. At the first time step you don't see NaN because that is just the initial condition.
At the second time step, seawater NO3 was NaN because it picked up the NaN coming from the sea-ice NO3, which in turn was caused by a missing epsilon number.
For example, the sink of sea-ice NO3 due to ice-algal uptake is calculated from the ratio of NO3 to all available nitrogen (NO3 + NH4). With NO3 and NH4 both initialized to 0 in the model (for now), the denominator is zero, so the sink is NaN, and hence both the ice NO3 and the oceanic NO3 became NaN at the next time step.
So I fixed this by adding a very small epsilon number to avoid this division by zero.
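A minimal sketch of the fix (the variable names and the epsilon constant here are illustrative, not the actual code):
! fraction of ice-algal N uptake drawn from NO3; the epsilon avoids 0/0 when both pools are zero
zfrac_no3 = iceno3(ji,jj) / ( iceno3(ji,jj) + icenh4(ji,jj) + epsilon10 )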
interpolate seawifs onto NAA grid
1) Download the monthly climatology data:
wget http://oceandata.sci.gsfc.nasa.gov/cgi/getfile/S19972442010273.L3m_MC_CHL_chlor_a_9km.nc http://oceandata.sci.gsfc.nasa.gov/cgi/getfile/S19972742010304.L3m_MC_CHL_chlor_a_9km.nc http://oceandata.sci.gsfc.nasa.gov/cgi/getfile/S19973052010334.L3m_MC_CHL_chlor_a_9km.nc http://oceandata.sci.gsfc.nasa.gov/cgi/getfile/S19973352010365.L3m_MC_CHL_chlor_a_9km.nc http://oceandata.sci.gsfc.nasa.gov/cgi/getfile/S19980012010031.L3m_MC_CHL_chlor_a_9km.nc http://oceandata.sci.gsfc.nasa.gov/cgi/getfile/S19980322010059.L3m_MC_CHL_chlor_a_9km.nc http://oceandata.sci.gsfc.nasa.gov/cgi/getfile/S19980602010090.L3m_MC_CHL_chlor_a_9km.nc http://oceandata.sci.gsfc.nasa.gov/cgi/getfile/S19980912010120.L3m_MC_CHL_chlor_a_9km.nc http://oceandata.sci.gsfc.nasa.gov/cgi/getfile/S19981212010151.L3m_MC_CHL_chlor_a_9km.nc http://oceandata.sci.gsfc.nasa.gov/cgi/getfile/S19981522010181.L3m_MC_CHL_chlor_a_9km.nc http://oceandata.sci.gsfc.nasa.gov/cgi/getfile/S19981822010212.L3m_MC_CHL_chlor_a_9km.nc http://oceandata.sci.gsfc.nasa.gov/cgi/getfile/S19982132010243.L3m_MC_CHL_chlor_a_9km.nc
2) Rename the data so that they will appear in the order of months (i.e. from Jan. to Dec.):
for i in *_MC_*.nc; do cp "$i" "tmp${i:5}"; done
ls tmp*.nc
tmp0012010031.L3m_MC_CHL_chlor_a_9km.nc
tmp0322010059.L3m_MC_CHL_chlor_a_9km.nc
tmp0602010090.L3m_MC_CHL_chlor_a_9km.nc
tmp0912010120.L3m_MC_CHL_chlor_a_9km.nc
tmp1212010151.L3m_MC_CHL_chlor_a_9km.nc
tmp1522010181.L3m_MC_CHL_chlor_a_9km.nc
tmp1822010212.L3m_MC_CHL_chlor_a_9km.nc
tmp2132010243.L3m_MC_CHL_chlor_a_9km.nc
tmp2442010273.L3m_MC_CHL_chlor_a_9km.nc
tmp2742010304.L3m_MC_CHL_chlor_a_9km.nc
tmp3052010334.L3m_MC_CHL_chlor_a_9km.nc
tmp3352010365.L3m_MC_CHL_chlor_a_9km.nc
3) Add a new variable called time to each file:
for i in $(ls tmp*9km.nc); do ncap2 -O -s 'time=array(1.,1.,1.)' $i vartime_$i; done
4) Add an unlimited (record) dimension called "time" to each file:
for i in $(ls vartime_*.nc); do ncecat -u time $i dimtime_$i; done
5) Concatenate the files:
ncrcat -h dimtime_*.nc seawifs_mon_clim.nc
6) Finally use SOSIE to interpolate onto NAA grid:
~/sosie-2.6.4/bin/sosie.x -f namelist.seawifs2naa
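As a quick sanity check (my addition), the concatenated file should report 12 time records before it is handed to SOSIE:
ncdump -h seawifs_mon_clim.nc | grep "time = "
# expected: time = UNLIMITED ; // (12 currently)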