Hi all, I'm new to the CESM1.0.3 model. The build succeeds, but when I submit the $CASE.$MACH.run script, ccsm.log reports the following error:
(seq_io_init) pio init parameters: before nml read
(seq_io_init) pio_stride = -99
(seq_io_init) pio_root = -99
(seq_io_init) pio_typename = nothing
(seq_io_init) pio_numtasks = -99
(seq_io_init) pio_debug_level = 0
pio_async_interface = F
(seq_io_init) pio init parameters: after nml read
(seq_io_init) pio_stride = -1
(seq_io_init) pio_root = 1
(seq_io_init) pio_typename = netcdf
(seq_io_init) pio_numtasks = -1
(seq_io_init) pio init parameters:
(seq_io_init) pio_stride = 4
(seq_io_init) pio_root = 1
(seq_io_init) pio_typename = NETCDF
(seq_io_init) pio_numtasks = 27
(seq_io_init) pio_debug_level = 0
pio_async_interface = F
(seq_comm_setcomm) initialize ID ( 7 GLOBAL ) pelist = 0 107 1 ( npes = 108) ( nthreads = 1)
(seq_comm_setcomm) initialize ID ( 2 ATM ) pelist = 0 107 1 ( npes = 108) ( nthreads = 1)
(seq_comm_setcomm) initialize ID ( 1 LND ) pelist = 0 107 1 ( npes = 108) ( nthreads = 1)
(seq_comm_setcomm) initialize ID ( 4 ICE ) pelist = 0 99 1 ( npes = 100) ( nthreads = 1)
(seq_comm_setcomm) initialize ID ( 5 GLC ) pelist = 0 107 1 ( npes = 108) ( nthreads = 1)
(seq_comm_setcomm) initialize ID ( 3 OCN ) pelist = 0 99 1 ( npes = 100) ( nthreads = 1)
(seq_comm_setcomm) initialize ID ( 6 CPL ) pelist = 0 107 1 ( npes = 108) ( nthreads = 1)
(seq_comm_joincomm) initialize ID ( 8 CPLATM ) join IDs = 6 2 ( npes = 108) ( nthreads = 1)
(seq_comm_joincomm) initialize ID ( 9 CPLLND ) join IDs = 6 1 ( npes = 108) ( nthreads = 1)
(seq_comm_joincomm) initialize ID ( 10 CPLICE ) join IDs = 6 4 ( npes = 108) ( nthreads = 1)
(seq_comm_joincomm) initialize ID ( 11 CPLOCN ) join IDs = 6 3 ( npes = 108) ( nthreads = 1)
(seq_comm_joincomm) initialize ID ( 12 CPLGLC ) join IDs = 6 5 ( npes = 108) ( nthreads = 1)
[c06n04:31992] *** An error occurred in MPI_Gather
[c06n04:31992] *** on communicator MPI COMMUNICATOR 5 CREATE FROM 0
[c06n04:31992] *** MPI_ERR_TYPE: invalid datatype
[c06n04:31992] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpirun has exited due to process rank 74 with PID 31963 on
node c06n07 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[c06n05:04048] 107 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal
[c06n05:04048] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
My MPI is openmpi1.4.3_ifort11.1, NetCDF is NetCDF4.1.3.ifort11.1, and the compiler is intel/composer_xe_2011_sp1.7.256.
I set the MPI and NetCDF environment paths in .bashrc and also set the corresponding paths when porting CESM. The build completes successfully, but the job aborts as soon as it is submitted to the compute nodes. Any idea where the error comes from?
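Since the build machine and compute nodes may resolve different MPI installations, one quick sanity check (a sketch only; it assumes OpenMPI's `mpif90`/`mpirun` wrappers are on PATH, and the binary path is hypothetical) is to confirm that the wrappers the run script sees match the Intel build:

```shell
# Show which Fortran MPI wrapper is on PATH and what compiler it invokes
# (for an Intel build this should report ifort, not gfortran).
command -v mpif90 && mpif90 -show || echo "mpif90 not found on PATH"

# Confirm the mpirun that launches the job belongs to the same OpenMPI 1.4.3 install.
command -v mpirun && mpirun --version 2>&1 | head -1 || echo "mpirun not found on PATH"

# Hypothetical executable path: check which shared libraries the CESM binary
# actually links at run time (a mismatched libmpi can trigger MPI_ERR_TYPE).
# ldd $CASEROOT/run/ccsm.exe | grep -i mpi
```

Running this in the batch script itself (not just the login shell) is useful, because .bashrc settings are not always inherited by batch jobs on the compute nodes.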