Error when using MPI_Reduce

  • Question

  • Hello,

When the program is run with more than three processes, it gives an error:

     

    • Assertion failed in file helper_fns.c at line 337: 0
      memcpy argument memory ranges overlap, dst_=0x8accf0 src_=0x8ac848 len_=1200
      internal ABORT - process 1
      rank 1 in job 46 n1.blades.cluster caused collective abort of all ranks
      exit status of rank 1: killed by signal 9

    I traced the problem to MPI_Reduce: without it, the program runs on any number of processes without any problems. So I stripped the program down to just the MPI_Reduce part:

     

     

    #include <iostream>
    #include <mpi.h>
    #include <stdlib.h>
    using namespace std;

    int Size = 100;

    int main (int argc, char *argv[])
    {
      int rank, size;

      MPI_Init (&argc, &argv);                /* starts MPI */
      MPI_Comm_rank (MPI_COMM_WORLD, &rank);  /* get current process id */
      MPI_Comm_size (MPI_COMM_WORLD, &size);  /* get number of processes */

      double *ms_total = new double[Size];
      double *ms_new = new double[Size];

      MPI_Bcast (&Size, 1, MPI_INT, 0, MPI_COMM_WORLD);
      srand(1);

      for (int i = 0; i < Size; i++)
        ms_new[i] = rand() % 100;

      MPI_Barrier(MPI_COMM_WORLD);
      for (int i = 0; i < Size; i++)
      {
        MPI_Reduce(&ms_new[i], &ms_total[i], Size, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
      }

      if (rank == 0)
      {
        for (int i = 0; i < Size; i++)
          cout << rank << "  " << ms_total[i] << endl;
      }

      std::cout << "Hello world from process" << rank << "\n";
      MPI_Finalize();
      return 0;
    }

     Now it gives the following error:

     

    • Assertion failed in file helper_fns.c at line 337: 0
      memcpy argument memory ranges overlap, dst_=0x8ce010 src_=0x8cdcf8 len_=800
      internal ABORT - process 1
      Fatal error in PMPI_Reduce: Other MPI error, error stack:
      PMPI_Reduce(1198).................: MPI_Reduce(sbuf=MPI_IN_PLACE, rbuf=0xfbc9c8, count=100, MPI_DOUBLE, MPI_SUM, root=0, MPI_COMM_WORLD) failed
      MPIR_Reduce(764)..................:
      MPIR_Reduce_binomial(172).........:
      MPIC_Recv(83).....................:
      MPIC_Wait(513)....................:
      MPIDI_CH3I_Progress(150)..........:
      MPID_nem_mpich2_blocking_recv(948):
      MPID_nem_tcp_connpoll(1720).......:
      state_commrdy_handler(1556).......:
      MPID_nem_tcp_recv_handler(1446)...: socket closed
      rank 1 in job 13 n1.blades.cluster caused collective abort of all ranks
      exit status of rank 1: return code 1

     

    What is this file helper_fns.c, and where did I make a mistake in the program?

     

    thanks in advance

    Sunday, January 22, 2012 7:45 AM

All replies

  • I found the bug in the program: the loop was calling MPI_Reduce with count = Size for every element, so each call read and wrote Size doubles starting at &ms_new[i] and &ms_total[i], running past the ends of both arrays, which is why the memory ranges ended up overlapping. A single MPI_Reduce over the whole arrays is all that is needed:

    #include <iostream>
    #include <mpi.h>
    #include <stdlib.h>
    using namespace std;

    int Size = 100;

    int main (int argc, char *argv[])
    {
      int rank, size;

      MPI_Init (&argc, &argv);                /* starts MPI */
      MPI_Comm_rank (MPI_COMM_WORLD, &rank);  /* get current process id */
      MPI_Comm_size (MPI_COMM_WORLD, &size);  /* get number of processes */

      double *ms_total = new double[Size];
      double *ms_new = new double[Size];

      MPI_Bcast (&Size, 1, MPI_INT, 0, MPI_COMM_WORLD);
      srand(1);

      for (int i = 0; i < Size; i++)
        ms_new[i] = rand() % 100;

      MPI_Barrier(MPI_COMM_WORLD);

      /* one reduction over the whole arrays: count matches the buffer size */
      MPI_Reduce(ms_new, ms_total, Size, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

      if (rank == 0)
      {
        for (int i = 0; i < Size; i++)
          cout << rank << "  " << ms_total[i] << endl;
      }

      std::cout << "Hello world from process" << rank << "\n";
      MPI_Finalize();
      return 0;
    }
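
    For what it's worth, the per-element loop from the original program can also be written correctly; the point is that count has to describe how much data each call actually touches. Below is a minimal sketch of that variant (my illustration, not part of the fix above), reusing the ms_new and ms_total buffers from the program above. The single call with count = Size remains the better choice, since it issues one collective instead of Size of them.

      /* Hypothetical alternative (not in the original post): one MPI_Reduce per
         element. With count = 1 each call stays inside the buffers, so the
         source and destination ranges can no longer overlap. */
      for (int i = 0; i < Size; i++)
      {
        MPI_Reduce(&ms_new[i], &ms_total[i], 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
      }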
     
    

    Sunday, January 22, 2012 8:48 AM
  • For questions related to MPI (Message Passing Interface), the "Windows HPC Server Message Passing Interface (MPI)" forum at http://forums.community.microsoft.com/en-US/windowshpcmpi/threads/ is a good place to ask them.

    Cheers

    Daniel


    http://www.danielmoth.com/Blog/
    Sunday, January 22, 2012 10:44 PM