[Gllug] Perl scripting challenge
Pete Ryland
pdr at pdr.cx
Fri Oct 6 11:46:49 UTC 2006
On 04/10/06, salsaman at xs4all.nl <salsaman at xs4all.nl> wrote:
>
> > On Wed, Oct 04, 2006 at 04:47:01PM +0100, - Tethys wrote:
> >> On 10/4/06, Simon Morris <simon.morris at cmtww.com> wrote:
> >>
> >> >Problem there is I don't know what the duplicated filenames are...
> >> >Apparently on this filesystem (I'm about to inherit a problem unless I
> >> >can deflect it away :-) ) they had a history of creating filenames
> >> >that are identical apart from case.
> >>
> >> No problem:
> >>
> >> #!/bin/sh
> >>
> >> find "${1:-.}" -type d -print | while read dir
> >> do
> >>     preserve_case=$(ls -a1 "$dir" | wc -l)
> >>     ignore_case=$(ls -a1 "$dir" | sort -f | uniq -i | wc -l)
> >>     if [ "$preserve_case" != "$ignore_case" ]
> >>     then
> >>         echo "Directory has duplicate filenames: $dir"
> >>     fi
> >> done
> >
> > But the problem is that he wants to know what the duplicated filenames
> > are, right? Knowing that there are duplicate filenames is obviously
> > much simpler.
> >
>
> My version will do that. It also uses toLOWER_uni, which the Perl
> guidelines recommend (to handle localised filenames).
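Here's a bash take on it that lists each duplicated basename (ignoring
case) along with every path that matches it: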
#!/bin/bash

# List every basename that occurs more than once in the tree (ignoring
# case), followed by the full paths that share it.
find "${1:-.}" -type f -exec basename {} \; | sort -f | uniq -di | \
while read dupfilename
do
    echo "$dupfilename:"
    find "${1:-.}" -iname "$dupfilename" -exec echo "  {}" \;
done
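
And since this started life as a Perl challenge, a rough sketch of the
same idea in Perl (untested; it uses File::Find from the core
distribution and plain lc() for case folding rather than toLOWER_uni,
so localised names are only handled approximately) might be:

#!/usr/bin/perl
# Untested sketch: collect files by case-folded basename and report any
# name shared by more than one path.
use strict;
use warnings;
use File::Find;

my $top = shift || '.';
my %paths_by_name;            # lowercased basename => [ full paths ]

find(sub {
    return unless -f $_;      # plain files only, like the find above
    push @{ $paths_by_name{lc $_} }, $File::Find::name;
}, $top);

for my $name (sort keys %paths_by_name) {
    my @dups = @{ $paths_by_name{$name} };
    next if @dups < 2;        # only names that occur more than once
    print "$name:\n";
    print "  $_\n" for @dups;
}

Grouping by folded basename means the tree is only walked once, and
each duplicated name is printed once with all of its paths underneath.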
Pete