[Gllug] Poor scripting?!

Richard Huxton dev at archonet.com
Tue Mar 11 16:37:07 UTC 2008


John Edwards wrote:
> On Tue, Mar 11, 2008 at 04:01:54PM +0000, Richard Huxton wrote:
>> John Edwards wrote:
>>> On Tue, Mar 11, 2008 at 02:41:16PM +0000, Henrik Morsing wrote:
>>>> Hi, I've had a look and help from the #gllug folks (thanks!), does
>>>> this look like it would do the same job, called as 'rename
>>>> s/cerprod/$CRMDB/ s/medprod/$CRMDB/ /cerillion'?
>>> <snip> 
>>>
>>> Is there any reason you aren't using Perl's own rename program?
>>>
>>> find . -iname "*pattern*" -print0 | xargs -r0 rename 's/pattern/replacement/g'
>> Nice.
> 
> It would of course need the -type and -maxdepth options added to find,
> and the full path to the Perl supplied rename (/usr/bin/prename on
> Debian) in case there are some other rename scripts in the search
> path.
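
Something like this, then? Untested, and guessing at the depth - but
with Debian's /usr/bin/prename, and assuming the directories sit
directly under /cerillion, both substitutions fit in one expression:

find /cerillion -maxdepth 1 -type d -print0 \
  | xargs -r0 /usr/bin/prename "s/cerprod/$CRMDB/; s/medprod/$CRMDB/"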
> 
> 
>>> Of course you may have too many directories for rename to process as
>>> arguments. It certainly works with several thousand.
>> Ah, xargs --max-args should handle that though.
> 
> But won't that leave some directories unchanged, as xargs will
> truncate the list of directories that find supplies?

Hmm - I thought it just forked another target command to handle the excess:

cd /usr/share/doc; find . -type d | xargs --max-args=3 echo `date` "$*" | head

Yep, or have I misunderstood what you meant?
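
A simpler way to see it, for what it's worth - each echo gets at most
three arguments, and xargs just keeps spawning echos until the input
runs out, rather than dropping anything:

seq 1 10 | xargs --max-args=3 echo

prints four lines, the last one being just "10".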

> Anyway a test run with 'rename -n' seemed to handle 300,000+ entries.

Probably academic then :-)
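
(For the archives: -n is the Perl rename's dry-run flag - it prints
the renames it would perform without touching anything, so something
like

rename -n "s/cerprod/$CRMDB/" /cerillion/*

is a safe way to sanity-check the expression first.)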

> I've still no idea why the original was run twice. Maybe the
> directories were being created as it ran or the original command
> failed occasionally. Neither of which is a good reason to run it
> twice, of course.

I couldn't figure that out, but it looked deliberate.

-- 
   Richard Huxton
   Archonet Ltd
-- 
Gllug mailing list  -  Gllug at gllug.org.uk
http://lists.gllug.org.uk/mailman/listinfo/gllug



