Which is better, opening and closing, or a massive file opening and a massive closing?

I'm still working on the script, though it is cleaning up even better; I
even have it running cleaner by dumping anything not matching some specs.

What I am trying to figure out though is this:

I have 42 places for the output to go. One of those is a constant dump; the
others are all based on whether or not there is data in a field. If the
data is in the field, the data writes to a file with the same name as the
data checked. If not, then it writes to a global catch-all.

<!-- snip -->

  open OUTFILE, ">/home/multifax/everyone" or die "Can't open ${infile}.out at home: $!";
  open OUTFILE1, ">/home/multifax/102" or die "Can't open ${infile}.out at home: $!";

   print OUTFILE "$fields[1]\@$tmptxt\n";

   if ($fields[11] eq 102) {
    print OUTFILE1 "$fields[1]\@$tmptxt\n";
   }

<!-- snap -->

I am wondering if it is more processor intensive to open the 42 separate
files at one time, parse the data, and then close all the files, or if I
should rewrite the parse to open the correct file, dump the data, close
that file, and then repeat the process. I know that programmatically it is
probably better to open and close the files as there would be no more
copy-and-pasting, but I was thinking about processor load. As it is right
now it takes the script about 5 seconds to parse the 557 lines of data.

If done in a loop, would it look something like this?

<!-- snip -->
   @filees = ('102', '104', '118');

if (grep $fields[4] eq $_, @filees) {
 open OUTFILE1, ">/home/multifax/$fields[4]" or die "Can't open $fields[4]!";
 print OUTFILE1 "$fields[1]\@$tmptxt\n";
 close OUTFILE1;
}

<!-- snap -->


I'm trying to apply what I have learned from you guys and from breaking my
code in the last few days in every script I am writing.

Thanks,
Robert



LoneWolf
2/11/2004 9:08:06 PM
perl.beginners

From: "LoneWolf@nc.rr.com" <LoneWolf@nc.rr.com>
> I am wondering if it is more processor intensive to open the 42
> separate files at one time, parse the data, and then close all the
> files, or if I should try to re-write the parse to open the correct
> file, dump the data, and then close that file, then repeat the
> process.  I know programmatically it is probably better to open and
> close the files as there would be no more copy and pasting, but was
> thinking processor intensive.  As it is right now it takes the script
> about 5 seconds to parse the 557 lines of data.

If you have this little data it would be best to collect it in memory, into
42 separate strings (in an array or hash of course!) and then at the
end loop through them and flush their contents to the files.

If you expect more data in the future you'd better open all 42 files 
in the beginning, put their handles into an array or hash, then print 
the lines as you go and close all files at the end.

Otherwise you spend most of the time opening and closing files. IMHO 
of course.
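The buffer-in-memory variant could look something like this (the pipe-delimited sample records and the /tmp paths are made up for illustration; the real script would split its own record format):

```perl
use strict;
use warnings;

# Accumulate output lines per destination in a hash of strings,
# then flush each buffer to its file once at the end.
my %buffer;

while (my $line = <DATA>) {
    chomp $line;
    my ($addr, $dest) = split /\|/, $line;
    $buffer{$dest} .= "$addr\n";
}

for my $dest (keys %buffer) {
    open my $fh, '>', "/tmp/$dest" or die "Can't open $dest: $!";
    print $fh $buffer{$dest};
    close $fh;
}

__DATA__
alice|102
bob|104
carol|102
```

Only as many `open` calls happen as there are destinations actually used, which is the whole point when most of the 42 files get nothing on a given run.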


something like

	my @filees = ('102', '104', '118');

	my %handles;
	foreach my $fileno (@filees) {
		open my $FH, '>', "/home/multifax/$fileno" or die "Can't open $fileno!";
		$handles{$fileno} = $FH;
	}

	...
	if (exists $handles{$fields[4]}) {
		print {$handles{$fields[4]}} "$fields[1]\@$tmptxt\n";
	}
	...

	foreach my $FH (values %handles) {
		close $FH;
	}


Jenda
=========== Jenda@Krynicky.cz == http://Jenda.Krynicky.cz ==========
There is a reason for living. There must be. I've seen it somewhere.
It's just that in the mess on my table ... and in my brain
I can't find it.
					--- me

Jenda
2/12/2004 1:34:34 PM
For Quality purposes, LoneWolf@nc.rr.com's mail on Wednesday 11 February
2004 22:08 may have been monitored or recorded as:

Hi,

> I'm still working on the script, though it is cleaning up even better, I
> even have it running cleaner by dumping anything not matching some specs.
> What I am trying to figure out though is this:
>
> I have 42 places for the output to go, 1 of those is a constant dump, the
> others are all based on whether or not there is data in a field.  If the
> data is in the field, the data writes to a file with the same name as the
> data checked.  If not then it writes to a global catch-all.

If I recall your last mail correctly, you were opening a lot of file handles,
then running into the switch kind of thing, and then closing all the files
again.
That was a lot of system calls (open) to eventually write to a few of the
files in the SWITCH (the accumulated ifs), and then again a lot of sys calls
to close them, where you had actually written to only a few of the opened
files.

That sounds slow.
Wiggins already suggested

if (grep $fields[4] == $_, @cities) {
  $tmptxt = $fields[10];
}
else {
  $tmptxt = '1-' . $fields[10];
}

for the first SWITCH-like construct.
For the second one I'd say, make an array of your filenames and use the content
of $fields[11] as index, like:

my @file_names = qw(/home/multifax/everyone /home/multifax/pack-fishbait .....);

if (defined $file_names[$fields[11] - 102]) {
  open OUTFILE, ">$file_names[$fields[11] - 102]" or die "Can't open $file_names[$fields[11] - 102]: $!";
  print OUTFILE "$fields[1]\@$tmptxt\n"; # your trailing ID
  close OUTFILE;
}
else {
  open OUTFILE, ">default.out" or die "Can't open default.out: $!";
  print OUTFILE "$fields[1]\@$tmptxt\n"; # your trailing ID
  close OUTFILE;
}

However, this assumes that you have continuous values from 102 upwards in
$fields[11]. If not, come up with a formula that gives you the index of the
wanted filename in @file_names depending on $fields[11], or use a hash instead:

$file_name{102} = "whatever/filename/you.want";

I suggest you tell us what the logic behind all these different files is, i.e.
what goes where for which cause: maybe someone can come up with a hash
structure that incorporates this knowledge.
Don't open and close 42 files if you will only ever print to 2 of them.
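A minimal sketch of that hash-of-filenames dispatch, with made-up codes and paths (the real mapping would come from whatever logic decides which file each code belongs to):

```perl
use strict;
use warnings;

# Map each field code to its output filename; anything unlisted
# falls through to the catch-all. Codes and paths are illustrative.
my %file_name = (
    102 => '/home/multifax/everyone',
    104 => '/home/multifax/pack-fishbait',
);

# Return the destination for a given code, defaulting to the catch-all.
sub dest_for {
    my ($code) = @_;
    return exists $file_name{$code} ? $file_name{$code} : 'default.out';
}

print dest_for(102), "\n";   # /home/multifax/everyone
print dest_for(999), "\n";   # default.out
```

Unlike the array-index scheme, the hash does not care whether the codes are contiguous from 102 upwards.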

Enjoy, Wolf





blaum
2/13/2004 1:14:50 AM
Hi,
How can I rearrange an array in a specific order based on the order of a 
hash? Something like this:

my @a = qw(Mary John Dan);
print join "\t", @a, "\n";

my %b = ( John => 0,
Dan => 1,
Mary => 2);

print "$_ => $b{$_}\n" for (keys %b);
print "$_-$b{$_}\t" foreach sort {$b{$a} <=> $b{$b}} keys %b;

The final order I expect for @a:  John Dan Mary

Thanks,

Shiping

shiping
2/16/2004 5:16:54 PM
Shiping Wang wrote:
> 
> Hi,

Hello,

> How can I rearrange an array in a specific order based on the order of a
> hash? Something like this:
> 
> my @a = qw(Mary John Dan);
> print join "\t", @a, "\n";
> 
> my %b = ( John => 0,
> Dan => 1,
> Mary => 2);
> 
> print "$_ => $b{$_}\n" for (keys %b);
> print "$_-$b{$_}\t" foreach sort {$b{$a} <=> $b{$b}} keys %b;
> 
> The final order for @a expect:  John Dan Mary


$ perl -le'
my @a = qw( Mary John Dan );
my %b = qw( John 0 Dan 1 Mary 2 );
print "@a";
@a = sort { $b{ $a } <=> $b{ $b } } @a;
print "@a";
'
Mary John Dan
John Dan Mary



John
-- 
use Perl;
program
fulfillment
krahnj
2/16/2004 9:16:13 PM
John W. Krahn wrote:

>Shiping Wang wrote:
>>
>> Hi,
>
>Hello,
>
>> How can I rearrange an array in a specific order based on the order of a
>> hash? Something like this:
>>
>> my @a = qw(Mary John Dan);
>> print join "\t", @a, "\n";
>>
>> my %b = ( John => 0,
>> Dan => 1,
>> Mary => 2);
>>
>> print "$_ => $b{$_}\n" for (keys %b);
>> print "$_-$b{$_}\t" foreach sort {$b{$a} <=> $b{$b}} keys %b;
>>
>> The final order for @a expect:  John Dan Mary
>
>
>$ perl -le'
>my @a = qw( Mary John Dan );
>my %b = qw( John 0 Dan 1 Mary 2 );
>print "@a";
>@a = sort { $b{ $a } <=> $b{ $b } } @a;
>print "@a";
>'
>Mary John Dan
>John Dan Mary
>
Smart. But the sort pattern might be easier on the eye if the array and hash
are not named @a and %b.

$ perl -le'
my @array = qw( Mary John Dan );
my %hash = qw( John 0 Dan 1 Mary 2 );
print "@array";
@array = sort { $hash{ $a } <=> $hash{ $b } } @array;
print "@array";
'
Mary John Dan
John Dan Mary

This is mainly for my own better understanding.

- Jan
--
These are my principles and if you don't like them... well, I have others. - Groucho Marx
lists
2/16/2004 11:10:51 PM
On Mon, 16 Feb 2004 11:16:54 -0600, shiping@wubios.wustl.edu (Shiping
Wang) wrote:

>Hi,
>How can I rearrange an array in a specific order based on the order of a 
>hash? Something like this:
>
>my @a = qw(Mary John Dan);
>print join "\t", @a, "\n";
>
>my %b = ( John => 0,
>Dan => 1,
>Mary => 2);
>
>print "$_ => $b{$_}\n" for (keys %b);
>print "$_-$b{$_}\t" foreach sort {$b{$a} <=> $b{$b}} keys %b;
>
>The final order for @a expect:  John Dan Mary

I just asked a similar question on perlmonks, and was
given this gem, originally devised by broquiant.
###########################################################
#!/usr/bin/perl
use warnings;
use strict;

my @unsorted = ( qw (Psdf lPik Easd aKwe SSdf eqwer scfgh Oegb rqwer T)
);
print join ',', @unsorted,"\n";

my %order = map {$_ => /[r-z]/ ? 'A' : /[a-q]/ ? 'B' : /[R-Z]/ ? 'C' :
'D'} ('a'..'z', 'A'..'Z');

my @sorted = map { substr($_ , 1) }
            sort
            map { $order{ substr($_,0,1) } . $_ } @unsorted;

print join ',', @sorted,"\n";
__END__
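Applied to the original Mary/John/Dan question, the same pack-sort-unpack trick looks like this (it relies on the rank keys being single characters that sort as strings in the desired order):

```perl
use strict;
use warnings;

my @a     = qw(Mary John Dan);
my %order = ( John => 0, Dan => 1, Mary => 2 );

# Prepend each name's single-digit rank, sort the plain strings,
# then strip the rank back off.
my @sorted = map  { substr $_, 1 }
             sort
             map  { $order{$_} . $_ } @a;

print "@sorted\n";   # John Dan Mary
```

With ranks beyond one digit you would need fixed-width keys (e.g. via sprintf) for the plain string sort to stay correct.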


--
I'm not really a human, but I play one on earth.
http://zentara.net/japh.html
zentara
2/17/2004 2:30:55 PM