
cpio versus star - Sun Solaris


  1. #1

    cpio versus star


    Normally when I want to move a directory structure from one machine to another
    I just use find and cpio, thus:

    find . -depth -print | cpio -pdmV /destination

    Today I decided to give star a try with the following:

    star -c -Hexustar -acl -v . | star -xp -acl -C /destination

    In this case the destination is an NFS mount and the network is 100Mb full
    duplex with hme NICs at each end.

    The sending server's netstat output looks like this:

    # netstat -i -I hme0 5
        input   hme0      output       input  (Total)   output
    packets errs  packets errs  colls  packets errs  packets errs  colls
    632475  0     1000674 0     0      632532  0     1000731 0     0
    909     0     922     0     0      909     0     922     0     0
    837     0     838     0     0      837     0     838     0     0
    844     0     843     0     0      844     0     843     0     0
    896     0     896     0     0      896     0     896     0     0
    910     0     909     0     0      910     0     909     0     0
    862     0     860     0     0      862     0     860     0     0
    920     0     928     0     0      920     0     928     0     0
    895     0     895     0     0      895     0     895     0     0
    909     0     920     0     0      909     0     920     0     0
    1120    0     1502    0     0      1120    0     1502    0     0
    893     0     1064    0     0      893     0     1064    0     0
    868     0     863     0     0      868     0     863     0     0
    942     0     941     0     0      942     0     941     0     0
    917     0     917     0     0      917     0     917     0     0
    863     0     863     0     0      863     0     863     0     0


    The receiving NFS server's netstat output looks like this:

    $ netstat -i -I hme0 5
        input     hme0        output         input  (Total)       output
    packets   errs  packets   errs  colls  packets   errs  packets   errs  colls
    208501090 0     198926719 0     0      208501144 0     198926773 0     0
    941       0     942       0     0      941       0     942       0     0
    971       0     972       0     0      971       0     972       0     0
    938       0     938       0     0      938       0     938       0     0
    967       0     964       0     0      967       0     964       0     0
    921       0     921       0     0      921       0     921       0     0
    913       0     913       0     0      913       0     913       0     0
    948       0     945       0     0      948       0     945       0     0
    923       0     923       0     0      923       0     923       0     0
    914       0     914       0     0      914       0     914       0     0
    919       0     919       0     0      919       0     919       0     0
    1194      0     1054      0     0      1194      0     1054      0     0
    1071      0     957       0     0      1071      0     957       0     0


    This is really slow. I would expect much higher throughput from star than
    from cpio.
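
    As a rough sanity check (assuming mostly full-size 1500-byte frames, which the
    netstat output above does not actually show), the 5-second samples work out to
    roughly:

    # ~900 packets per 5-second sample, at an assumed ~1500 bytes per frame
    echo "scale=1; 900 / 5 * 1500 / 1024" | bc    # about 263 KB/s

    That is a small fraction of the roughly 12 MB/s a 100Mb full duplex link can
    carry, so the wire itself is nowhere near saturated.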

    The sending NFS client is running at 100Mb/sec full duplex, as shown here:

    # /usr/sbin/ndd -set /dev/hme instance 0
    # ndd -get /dev/hme link_status
    1
    # ndd -get /dev/hme link_speed
    1
    # ndd -get /dev/hme link_mode
    1
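
    For reference, a quick way to read all three parameters back in one go (this
    assumes instance 0 is the interface in use; on hme a value of 1 for these
    parameters means link up, 100Mb and full duplex respectively):

    /usr/sbin/ndd -set /dev/hme instance 0
    for p in link_status link_speed link_mode
    do
        echo "$p: `/usr/sbin/ndd -get /dev/hme $p`"
    done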


    The NFS server is the same.


    Did I miss something?

    Dennis

  2. #2

    Re: cpio versus star

    In article <Pine.GSO.4.58.0310050952580.4171blastwave>,
    Dennis Clarke <org> wrote: 

    ....
     

    You get it if you do comparable things and if you use all of star's features.


    First, a better command line would look like this:

    star -time -copy -p -acl . /destination

    Then, star by default does reliable extracts while cpio does not.
    If you don't expect extraction problems (e.g. from a full NFS destination FS)
    and would like star to do unreliable extracts as well, call:

    star -time -copy -no-fsync -p -acl . /destination


    It may also be better to copy _from_ NFS to a UFS partition because writing _to_
    NFS is not really fast.
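
    To put numbers on it, something along these lines should give a like-for-like
    comparison (the paths are placeholders, and the destination should be emptied
    between the two runs so both copies start from an empty tree):

    cd /source
    # the cpio pass-through copy, as in the original posting
    time sh -c 'find . -depth -print | cpio -pdmV /destination'
    # the star copy, without fsync, as suggested above
    time star -time -copy -no-fsync -p -acl . /destination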


    If you give star a comparable chance, you will see 10-20% better performance
    than with cpio.

    --
    EMail:isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
    tu-berlin.de (uni) If you don't have iso-8859-1
    fraunhofer.de (work) chars I am J"org Schilling
    URL: http://www.fokus.fraunhofer.de/usr/schilling ftp://ftp.berlios.de/pub/schily

  3. #3

    Re: cpio versus star

    In article <Pine.GSO.4.58.0310050952580.4171blastwave>,
    Dennis Clarke <org> wrote: 

    Second posting.....sorry, but testing takes a long time.

    I did compare

    ufsdump -f - / | ufsrestore -rf -

    with

    star -copy -M -no-fsync -sp -p -time -C / . .


    The target of the copy was /export/home/tmp (on the same disk but a different
    partition, so there are many seeks).

    The root FS was 5.2 GB, and ufsdump took 41% more wall clock time,
    2.6x the user CPU time and 26% more system CPU time than star.

    I don't like to test with CPIO because, due to a bug, it trashes the time stamps
    of the source files instead of setting the time stamps of the target files.....
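
    For anyone who wants to repeat this, something like the following should
    reproduce the comparison (using /export/home/tmp as the target, as above;
    ptime is one way to get the wall clock / user / sys breakdown, and the target
    directory should be emptied between the two runs):

    # run both from the target partition
    cd /export/home/tmp

    # ufsdump of / piped into ufsrestore, extracting into the current directory
    ptime sh -c 'ufsdump -f - / | ufsrestore -rf -'

    # star copy mode; the final "." is the copy target, i.e. the current directory
    ptime star -copy -M -no-fsync -sp -p -time -C / . .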

    --
    EMail:isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
    tu-berlin.de (uni) If you don't have iso-8859-1
    fraunhofer.de (work) chars I am J"org Schilling
    URL: http://www.fokus.fraunhofer.de/usr/schilling ftp://ftp.berlios.de/pub/schily
