Work Note
 

Dec. 2002 To-dos:

1. λRAM - Benchmark Experiments;

2. Further study: DSM, VTK Interaction, Data Mining, OO Programming;

3. Interaction in VTK;

Mon Tue Wed Thu Fri Sat Sun
                          1
  2   3   4   5   6   7   8
  9  10  11  12  13  14  15
 16  17  18  19  20  21  22
 23  24  25  26  27  28  29
 30  31

- Final Week

 

Dec. 19, Thursday
 
  • Study note - Chapter 13, "mmap and DMA" (Linux Device Drivers)
    * Three sections: 1) the implementation of mmap; 2) the kiobuf mechanism; 3) DMA I/O operations

    * Three-level page table in Linux: PGD (Page Directory), PMD (Page Mid-level Directory), and the Page Table - declared in <asm/page.h> and <asm/pgtable.h>. This is how a virtual address is mapped to physical memory (a small sketch follows).
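    As a rough illustration of that three-level walk, here is a tiny userspace sketch (not kernel code) that splits a 32-bit virtual address into PGD/PMD/PTE indices plus a page offset. The shift values are assumptions picked for illustration (a 2/9/9/12 split, as on x86 PAE); the real ones are architecture-specific and come from <asm/page.h> and <asm/pgtable.h>.

        #include <stdio.h>

        /* Illustrative 32-bit split for a three-level walk; the real
         * shifts differ per architecture. */
        #define PAGE_SHIFT 12   /* 4 KB pages                 */
        #define PMD_SHIFT  21   /* 9 bits of PTE index above  */
        #define PGD_SHIFT  30   /* 9 bits of PMD index above  */

        int main(void)
        {
            unsigned long vaddr = 0xB7412ABCUL;   /* example virtual address */
            unsigned long pgd = vaddr >> PGD_SHIFT;
            unsigned long pmd = (vaddr >> PMD_SHIFT) & 0x1FF;
            unsigned long pte = (vaddr >> PAGE_SHIFT) & 0x1FF;
            unsigned long off = vaddr & ((1UL << PAGE_SHIFT) - 1);
            printf("PGD %lu -> PMD %lu -> PTE %lu -> offset %lu\n",
                   pgd, pmd, pte, off);
            return 0;
        }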

     

     

    Dec. 19, Thursday
     
  • λRAM vs. mmap:
    1) λRAM borrows some ideas from DSM and is more complicated than mmap
    2) λRAM is a multi-level architecture
    3) Latency is more critical for λRAM, since λRAM fetches data through the NIC from a remote server

     

    More detail is needed, but mmap is a good analogy (see the sketch below).

    Reference: Linux Device Drivers, 2nd Edition, Chapter 13 - mmap and DMA.
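    To make the analogy concrete, a minimal mmap example in C: it maps a file read-only and touches every page, so the kernel faults pages in from disk on demand - the role λRAM would play over the network. Standard POSIX only; no λRAM code.

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            if (argc < 2) return 1;
            int fd = open(argv[1], O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }
            struct stat st;
            fstat(fd, &st);

            /* Map the whole file; pages are faulted in lazily on access. */
            char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            long sum = 0;
            for (off_t i = 0; i < st.st_size; i++)
                sum += p[i];          /* each new page triggers a fault */
            printf("checksum: %ld\n", sum);

            munmap(p, st.st_size);
            close(fd);
            return 0;
        }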

     

     

    Dec. 18, Wednesday
     
  • The difference between DSM and λRAM is like the difference between virtual memory allocation and file access through the process address space:
    1) virtual memory allocation uses malloc(); the latter uses mmap()

    2) DSM is just a mechanism that presents the distributed memory of a cluster as a unified address space; beyond that, λRAM must prefetch data from a remote server on the other side of a WAN.

    3) there is no write operation in λRAM. How would λRAM work if writing were required? It would then be almost the same as DSM, which allocates virtual address space.

     

    To sum up, DSM provides a block of memory (physical and virtual) for the processes; λRAM maps a network transfer pipeline into the address space.

     

    λRAM should work like mmap:

    mmap - void* mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset)

    LamdaRAM - LamdaRAM* LamdaMap(void*, size_t, ....)
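    A compilable sketch of what that interface could look like. Everything here is hypothetical - the struct fields, the server/port parameters, and the stub body are assumptions that only flesh out the mmap-shaped signature above, not an actual LamdaRAM API.

        #include <stddef.h>
        #include <stdlib.h>

        /* Hypothetical handle; the fields are assumptions. */
        typedef struct LamdaRAM {
            void  *base;    /* local window the data set is mapped into */
            size_t length;  /* size of the mapping                      */
            /* ... remote endpoint, page table, prefetch state ...      */
        } LamdaRAM;

        /* Mirrors mmap's shape: map `length` bytes of a remote data set,
         * identified by server/port, into the local address space. */
        LamdaRAM *LamdaMap(void *addr, size_t length,
                           const char *server, unsigned short port)
        {
            (void)addr; (void)server; (void)port;  /* stub: no transport */
            LamdaRAM *lr = malloc(sizeof *lr);
            if (!lr) return NULL;
            lr->base   = malloc(length);  /* stand-in for the local cache */
            lr->length = length;
            return lr;
        }

        int main(void)
        {
            /* Placeholder endpoint - the SARA server noted on Dec. 03. */
            LamdaRAM *lr = LamdaMap(NULL, 1 << 20, "195.169.124.47", 9000);
            if (lr) { free(lr->base); free(lr); }
            return 0;
        }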

     

     

    References

    1. DSM -

    2. Computer Architecture - A Quantitative Approach, Ch8.4

     

     

    Dec. 17, Tuesday
     
  • Using MPI_Probe before MPI_Recv, instead of a blind MPI_Recv into a fixed-size buffer (sketch below)
  • mmap a remote data service as virtual memory, i.e. void* mmap(MemSize, IP, Transfer)?
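    The usual probe-then-receive pattern, sketched in C: probe first to learn the incoming message's size, allocate a buffer from the status, then issue the matching MPI_Recv. Only standard MPI calls are used; the wrapper name is made up.

        #include <mpi.h>
        #include <stdlib.h>

        /* Receive a message whose size isn't known in advance. */
        void recv_any(void)
        {
            MPI_Status st;
            int nbytes;

            /* Block until a message is pending, without receiving it yet. */
            MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            MPI_Get_count(&st, MPI_BYTE, &nbytes);

            char *buf = malloc(nbytes);
            MPI_Recv(buf, nbytes, MPI_BYTE, st.MPI_SOURCE, st.MPI_TAG,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* ... use buf ... */
            free(buf);
        }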

     

    Dec. 16, Monday
     
  • Change the prefetching function
  • Design the hit-rate measurement (a minimal tally is sketched below)
  • Integrate QUANTA into the code
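    A minimal sketch of how the hit rate could be tallied; the struct and counter names are assumptions, not λRAM code.

        #include <stdio.h>

        typedef struct { unsigned long hits, misses; } HitStats;

        /* Hit rate = hits / (hits + misses). */
        double hit_rate(const HitStats *s)
        {
            unsigned long total = s->hits + s->misses;
            return total ? (double)s->hits / (double)total : 0.0;
        }

        int main(void)
        {
            HitStats s = { 970, 30 };  /* e.g. 970 local hits, 30 remote fetches */
            printf("hit rate = %.1f%%\n", 100.0 * hit_rate(&s));  /* 97.0% */
            return 0;
        }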

     

    Dec. 06, Friday
     
  • Hit rate in WS
  • Hit rate in λRAM
  • Performance vs. Page Size

     

    Dec. 05, Thursday
     
  • Work on Scylla and Charybdis.
  • Scylla is on OmniNet;

     

    Dec. 04, Wednesday
     
  • traceroute charybdis.sl.startap.net to check the route path. If the path has 7-8 hops, it's correct.
  • Since the master's NIC doesn't have GigE, I have to transfer data from a slave.
  • Bisheng and Scylla are on OmniNet. The master nodes on both clusters don't have a GigE connection, but the slaves do.

     

    Dec. 03, Tuesday
     
  • Server on SARA: 195.169.124.47
  • Charybdis' IP: 206.220.241.20/24

     

    Dec. 02, Monday
     
  • Set up experiments
  • LAC's cluster on Starlight: 206.220.241.13-16 / 40-42
  • Sending data via the QUANTA network API instead of SABUL (a minimal sketch follows the steps below):
    1. Use TCPServer to listen for the query request
    2. Use TCPClient to send the query request
    3. Use rbudpSender to send data
    4. Use rbudpReceiver to receive data
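    A minimal C sketch of the client side of this exchange (step 2), with plain POSIX sockets standing in for QUANTA's TCPClient, since QUANTA's exact class signatures aren't reproduced here. The port and query string are placeholders; steps 3-4 (the RBUDP bulk transfer) are only marked as comments.

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <sys/socket.h>

        int main(void)
        {
            /* Step 2: open a TCP control channel and send the query. */
            int ctrl = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in srv = {0};
            srv.sin_family = AF_INET;
            srv.sin_port   = htons(9000);                 /* placeholder port  */
            inet_pton(AF_INET, "206.220.241.20", &srv.sin_addr); /* Charybdis */
            if (connect(ctrl, (struct sockaddr *)&srv, sizeof srv) < 0) {
                perror("connect");
                return 1;
            }
            const char *query = "GET block=42";           /* placeholder query */
            write(ctrl, query, strlen(query));

            /* Steps 3-4: the server's rbudpSender would now blast the payload
             * over UDP and an rbudpReceiver would collect it here; a recv()
             * loop over a UDP socket would stand in for that in this sketch. */
            close(ctrl);
            return 0;
        }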

     

     

    Updated on Dec. 19 by Charles Zhang

