removing duplicate lines - PERL Beginners


  1. #1

    removing duplicate lines

    I am writing a Perl script to automatically generate a netlogon.bat file for Samba
    whenever a user logs onto a domain. The only parameter that is passed to it is the
    username. My problem is that different groups get some of the same mappings. What I really
    need to do is filter out duplicate lines in the finished output. I tried piping the output
    through 'uniq' but it only filters successive duplicate lines. Anyone have any suggestions?

    #!/usr/bin/perl

    my $user = shift;
    my $drives = {F => "NET USE F: \\\\SKYLINE\\SKYLINEF\r\n",
    H => "NET USE H: \\\\SKYLINE\\SHARE\r\n",
    I => "NET USE I: \\\\SHIPPING1\\INVENTORY\r\n",
    M => "NET USE M: \\\\SKYLINE\\SKYLINEM\r\n",
    S => "NET USE S: \\\\SHIPPING1\\SHOP\r\n",
    Y => "NET USE Y: \\\\ACCOUNTING\\FLTSCHOOL\r\n",
    Z => "NET USE Z: \\\\ACCOUNTING\\MAINT\r\n"};
    my $which = {accounting => "F H I M S Y Z", mech => "I M S Z", dispatch => "M",
    instructors => "M"};
    my $groups = `cat /etc/group | grep ${user} | cut -d ':' -f 1`;
    $groups =~ s/\n/\:/sg;

    # Start generating logon script
    #open LOGON, ">/usr/local/samba/netlogon/${user}.bat";
    open LOGON, ">/tmp/${user}.bat";
    print LOGON "\ECHO OFF\r\n";

    foreach $group (split /:/, $groups) {
    foreach $drive (split / /, $which->{$group}) {
    print LOGON $drives->{$drive};
    }
    }

    close LOGON;

    --
    Andrew Gaffney

    Andrew Gaffney Guest

  2. #2

    Re: removing duplicate lines

    Andrew Gaffney wrote:
    >
    > I am writing a Perl script to automatically generate a netlogon.bat file for Samba
    > whenever a user logs onto a domain. The only parameter that is passed to it is the
    > username. My problem is that different groups get some of the same mappings. What I really
    > need to do is filter out duplicate lines in the finished output. I tried piping the output
    > through 'uniq' but it only filters successive duplicate lines. Anyone have any suggestions?
    [snip code]

    Hi Andrew.

    The quick answer is:

    perldoc -q dupl

    If you need any more then ask again :)
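
    For reference, the idiom that FAQ entry points to is roughly this (a
    minimal sketch; @lines stands in for whatever output you have collected):

    my %seen;
    my @unique = grep { !$seen{$_}++ } @lines;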

    Rob


    Rob Dixon Guest

  3. #3

    Re: removing duplicate lines

    Rob Dixon wrote:
    > Andrew Gaffney wrote:
    >
    >>I am writing a Perl script to automatically generate a netlogon.bat file for Samba
    >>whenever a user logs onto a domain. The only parameter that is passed to it is the
    >>username. My problem is that different groups get some of the same mappings. What I really
    >>need to do is filter out duplicate lines in the finished output. I tried piping the output
    >>through 'uniq' but it only filters successive duplicate lines. Anyone have any suggestions?
    >
    >
    > [snip code]
    >
    > Hi Andrew.
    >
    > The quick answer is:
    >
    > perldoc -q dupl
    >
    > If you need any more then ask again :)
    I was able to indirectly get the answer from that. Reading that, I realized that if I run
    my output through 'sort' and then 'uniq' or even just 'sort -u', it does what I want it to do.

    --
    Andrew Gaffney

    Andrew Gaffney Guest

  4. #4

    Re: removing duplicate lines

    Andrew Gaffney wrote:
    >
    > Rob Dixon wrote:
    > >
    > > Andrew Gaffney wrote:
    > >
    > > > I am writing a Perl script to automatically generate a netlogon.bat file for Samba
    > > > whenever a user logs onto a domain. The only parameter that is passed to it is the
    > > > username. My problem is that different groups get some of the same mappings. What I really
    > > > need to do is filter out duplicate lines in the finished output. I tried piping the output
    > > > through 'uniq' but it only filters successive duplicate lines. Anyone have any suggestions?
    > >
    > >
    > > [snip code]
    > >
    > > Hi Andrew.
    > >
    > > The quick answer is:
    > >
    > > perldoc -q dupl
    > >
    > > If you need any more then ask again :)
    >
    > I was able to indirectly get the answer from that. Reading that,
    > I realized that if I run my output through 'sort' and then 'uniq'
    > or even just 'sort -u', it does what I want it to do.
    I'm reluctant to let this go, but a 'proper' answer would have to be
    of the, "I wouldn't start from here, " type. You've used Perl as
    a scripting language, which it isn't. Perl's very good at doing anything
    you need on a platform-independent basis, and shelling out with 'system'
    calls or backticks is almost never necessary and makes the whole program
    platform and shell-specific.

    If this is even a semi-permanent piece of software then, if I were
    you, I would let the group blitz it just to show you what can be done.
    You might even want to see that anyway as a learning exercise.

    HTH,

    Rob


    Rob Dixon Guest

  5. #5

    Re: removing duplicate lines

    Andrew Gaffney wrote:
    >
    > I am writing a Perl script to automatically generate a netlogon.bat file for Samba
    > whenever a user logs onto a domain. The only parameter that is passed to it is the
    > username. My problem is that different groups get some of the same mappings. What I really
    > need to do is filter out duplicate lines in the finished output.
    Whenever you want unique values think "hash".
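
    As a minimal sketch of that idea applied to the original loop, printing a
    mapping only the first time it is seen:

    my %seen;
    foreach my $group (split /:/, $groups) {
        foreach my $drive (split / /, $which->{$group}) {
            print LOGON $drives->{$drive} unless $seen{ $drives->{$drive} }++;
        }
    }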

    > I tried piping the output
    > through 'uniq' but it only filters successive duplicate lines. Anyone have any suggestions?
    Your example does not have 'uniq' (or 'sort -u') in it so I am not sure
    what you are trying to do.

    > #!/usr/bin/perl
    use warnings;
    use strict;
    > my $user = shift;
    > my $drives = {F => "NET USE F: \\\\SKYLINE\\SKYLINEF\r\n",
    > H => "NET USE H: \\\\SKYLINE\\SHARE\r\n",
    > I => "NET USE I: \\\\SHIPPING1\\INVENTORY\r\n",
    > M => "NET USE M: \\\\SKYLINE\\SKYLINEM\r\n",
    > S => "NET USE S: \\\\SHIPPING1\\SHOP\r\n",
    > Y => "NET USE Y: \\\\ACCOUNTING\\FLTSCHOOL\r\n",
    > Z => "NET USE Z: \\\\ACCOUNTING\\MAINT\r\n"};
    Why not just use a hash instead of a reference to a hash? The use of
    "\r\n" is non-portable, you should use "\015\012" instead.

    my $CRLF = "\015\012";

    my %drives = (
    F => 'NET USE F: \\\\SKYLINE\SKYLINEF' . $CRLF,
    H => 'NET USE H: \\\\SKYLINE\SHARE' . $CRLF,
    I => 'NET USE I: \\\\SHIPPING1\INVENTORY' . $CRLF,
    M => 'NET USE M: \\\\SKYLINE\SKYLINEM' . $CRLF,
    S => 'NET USE S: \\\\SHIPPING1\SHOP' . $CRLF,
    Y => 'NET USE Y: \\\\ACCOUNTING\FLTSCHOOL' . $CRLF,
    Z => 'NET USE Z: \\\\ACCOUNTING\MAINT' . $CRLF,
    );

    > my $which = {accounting => "F H I M S Y Z", mech => "I M S Z", dispatch => "M",
    > instructors => "M"};
    You should probably use a hash of arrays for this (so you don't have to
    split the string later):

    my %which = (
    accounting => [ qw(F H I M S Y Z) ],
    mech => [ qw(I M S Z) ],
    dispatch => [ qw(M) ],
    instructors => [ qw(M) ],
    );
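
    With that layout the inner loop dereferences the array instead of splitting
    a string; a rough sketch, keeping the rest of the original code as it is:

    for my $group (split /:/, $groups) {
        next unless $which{$group};    # skip groups with no drive mappings
        for my $drive (@{ $which{$group} }) {
            print LOGON $drives{$drive};
        }
    }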

    > my $groups = `cat /etc/group | grep ${user} | cut -d ':' -f 1`;
    Ick, ick, ick! Perl provides built-in functions to access /etc/group
    and /etc/passwd

    perldoc -f getgrnam
    perldoc -f getgrgid
    perldoc -f getgrent
    perldoc -f setgrent
    perldoc -f endgrent

    perldoc -f getpwnam
    perldoc -f getpwuid
    perldoc -f getpwent
    perldoc -f setpwent
    perldoc -f endpwent
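
    For example, a rough sketch of collecting the groups with getgrent()
    instead of shelling out (unlike the original grep, this checks the member
    list for an exact match):

    my @groups;
    while ( my ( $name, undef, undef, $members ) = getgrent() ) {
        push @groups, $name if grep { $_ eq $user } split ' ', $members;
    }
    endgrent();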

    > $groups =~ s/\n/\:/sg;
    >
    > # Start generating logon script
    > #open LOGON, ">/usr/local/samba/netlogon/${user}.bat";
    > open LOGON, ">/tmp/${user}.bat";
    You should _ALWAYS_ verify that the file opened correctly.

    open LOGON, ">/tmp/$user.bat" or die "Cannot open /tmp/$user.bat: $!";

    > print LOGON "\ECHO OFF\r\n";
    >
    > foreach $group (split /:/, $groups) {
    > foreach $drive (split / /, $which->{$group}) {
    > print LOGON $drives->{$drive};
    > }
    > }
    >
    > close LOGON;


    John
    --
    use Perl;
    program
    fulfillment
    John W. Krahn Guest

  6. #6

    Re: removing duplicate lines

    John W. Krahn wrote:
    > Andrew Gaffney wrote:
    >
    >>I am writing a Perl script to automatically generate a netlogon.bat file for Samba
    >>whenever a user logs onto a domain. The only parameter that is passed to it is the
    >>username. My problem is that different groups get some of the same mappings. What I really
    >>need to do is filter out duplicate lines in the finished output.
    >
    > Whenever you want unique values think "hash".
    Well, it would have been weird to have a hash with keys named 'NET USE F:
    \\\\SKYLINE\\SKYLINEF\r\n'.
    >>I tried piping the output
    >>through 'uniq' but it only filters successive duplicate lines. Anyone have any suggestions?
    >
    > Your example does not have 'uniq' (or 'sort -u') in it so I am not sure
    > what you are trying to do.
    I mean that I tried 'cat /tmp/user.bat | uniq' after the script had run to see how it
    would work.
    >>#!/usr/bin/perl
    >
    > use warnings;
    > use strict;
    >
    >>my $user = shift;
    >>my $drives = {F => "NET USE F: \\\\SKYLINE\\SKYLINEF\r\n",
    >> H => "NET USE H: \\\\SKYLINE\\SHARE\r\n",
    >> I => "NET USE I: \\\\SHIPPING1\\INVENTORY\r\n",
    >> M => "NET USE M: \\\\SKYLINE\\SKYLINEM\r\n",
    >> S => "NET USE S: \\\\SHIPPING1\\SHOP\r\n",
    >> Y => "NET USE Y: \\\\ACCOUNTING\\FLTSCHOOL\r\n",
    >> Z => "NET USE Z: \\\\ACCOUNTING\\MAINT\r\n"};
    >
    > Why not just use a hash instead of a reference to a hash? The use of
    > "\r\n" is non-portable, you should use "\015\012" instead.
    I'm not that worried about portability since this was something I threw together for use
    on MY system with MY particular setup.
    > my $CRLF = "\015\012";
    >
    > my %drives = (
    > F => 'NET USE F: \\\\SKYLINE\SKYLINEF' . $CRLF,
    > H => 'NET USE H: \\\\SKYLINE\SHARE' . $CRLF,
    > I => 'NET USE I: \\\\SHIPPING1\INVENTORY' . $CRLF,
    > M => 'NET USE M: \\\\SKYLINE\SKYLINEM' . $CRLF,
    > S => 'NET USE S: \\\\SHIPPING1\SHOP' . $CRLF,
    > Y => 'NET USE Y: \\\\ACCOUNTING\FLTSCHOOL' . $CRLF,
    > Z => 'NET USE Z: \\\\ACCOUNTING\MAINT' . $CRLF,
    > );
    >
    >>my $which = {accounting => "F H I M S Y Z", mech => "I M S Z", dispatch => "M",
    >>instructors => "M"};
    >
    > You should probably use a hash of arrays for this (so you don't have to
    > split the string later):
    >
    > my %which = (
    > accounting => [ qw(F H I M S Y Z) ],
    > mech => [ qw(I M S Z) ],
    > dispatch => [ qw(M) ],
    > instructors => [ qw(M) ],
    > );
    I'll probably change this.
    >>my $groups = `cat /etc/group | grep ${user} | cut -d ':' -f 1`;
    >
    > Ick, ick, ick! Perl provides built-in functions to access /etc/group
    > and /etc/passwd
    >
    > perldoc -f getgrnam
    > perldoc -f getgrgid
    > perldoc -f getgrent
    > perldoc -f setgrent
    > perldoc -f endgrent
    >
    > perldoc -f getpwnam
    > perldoc -f getpwuid
    > perldoc -f getpwent
    > perldoc -f setpwent
    > perldoc -f endpwent
    This script wasn't much more than a quick hack, anyway. I'll work on stuff like that later.
    >>$groups =~ s/\n/\:/sg;
    >>
    >># Start generating logon script
    >>#open LOGON, ">/usr/local/samba/netlogon/${user}.bat";
    >>open LOGON, ">/tmp/${user}.bat";
    >
    > You should _ALWAYS_ verify that the file opened correctly.
    >
    > open LOGON, ">/tmp/$user.bat" or die "Cannot open /tmp/$user.bat: $!";
    I agree. I just modified existing code and didn't think about that.
    >>print LOGON "\ECHO OFF\r\n";
    >>
    >>foreach $group (split /:/, $groups) {
    >> foreach $drive (split / /, $which->{$group}) {
    >> print LOGON $drives->{$drive};
    >> }
    >>}
    >>
    >>close LOGON;
    >
    > John
    Thanks for all the suggestions.

    --
    Andrew Gaffney

    Andrew Gaffney Guest

  7. #7

    Re: removing duplicate lines

    On Dec 9, 2003, at 8:33 PM, Andrew Gaffney wrote:
    > John W. Krahn wrote:
    >> Andrew Gaffney wrote:
    >>> I am writing a Perl script to automatically generate a netlogon.bat
    >>> file for Samba
    >>> whenever a user logs onto a domain. The only parameter that is
    >>> passed to it is the
    >>> username. My problem is that different groups get some of the same
    >>> mappings. What I really
    >>> need to do is filter out duplicate lines in the finished output.
    >> Whenever you want unique values think "hash".
    >
    > Well, it would have been weird to have a hash with keys named 'NET USE
    > F: \\\\SKYLINE\\SKYLINEF\r\n'.
    Why is this weird? It's a string, and hash keys need to be strings. Good
    fit.

    Actually, this is a very common textbook Perl idiom. The sooner you
    get the hang of tricks like this the better.

    James

    James Edward Gray II Guest

  8. #8

    Re: removing duplicate lines

    Andrew Gaffney wrote:
    > John W. Krahn wrote:
    > > Whenever you want unique values think "hash".
    >
    > Well, it would have been weird to have a hash with keys named 'NET USE F:
    > \\\\SKYLINE\\SKYLINEF\r\n'.
    No. It is not at all weird to use a hash for any of the purposes for which it is well-suited.
    Among those purposes are ensuring uniqueness and testing existence. 1-valued hashes are
    extremely efficient engines for both:

    my $paths_seen = {};
    foreach my $drive (keys %drives) {
        my $map_command = $drives{$drive};
        next if $paths_seen->{$map_command};    # already emitted a mapping with this command
        assign_drive($map_command);
        $paths_seen->{$map_command} = 1;
    }

    The above assumes that you are willing to make brute-force assumptions about which mapping to a
    particular share is the appropriate one. If there are factors you wish to weigh in choosing which
    mapping to keep, you would want to call some resolving function when an element is found rather
    than simply next-ing.
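
    A hypothetical sketch of that alternative, where pick_mapping() is just a
    placeholder for whatever policy decides which command wins once a share has
    already been seen:

    my %chosen;    # share path => NET USE command
    foreach my $drive ( sort keys %drives ) {
        my ($share) = $drives{$drive} =~ /(\\\\\S+)/;    # the \\SERVER\SHARE part
        if ( exists $chosen{$share} ) {
            $chosen{$share} = pick_mapping( $chosen{$share}, $drives{$drive} );
        }
        else {
            $chosen{$share} = $drives{$drive};
        }
    }
    assign_drive($_) for values %chosen;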

    Joseph


    R. Joseph Newton Guest
