[SWLUG] locating bits of code in large number of files

Telsa Gwynne hobbit at aloss.ukuu.org.uk
Sat Oct 16 13:29:35 UTC 2004


On Fri, Oct 15, 2004 at 11:14:36PM +0100 or thereabouts, Neil Jones wrote:
> 
> My question is simple. Given that I have one root directory with dozens of 
> subdirectories below it (they only go one deep - there are no directories 
> within directories), how do I make a single command list the examples from 
> all the files?
> 
> All I really need is a count of the number of files with and without the code 
> as I am actually looking for what will change under the new system. I can 
> work this out from a count of those with. Listing a line for every example is 
> good enough for me to do this.
> 
> One complication: my hosting, which was chosen 5 years ago before I learned 
> about the wonders of Linux, is on FreeBSD.

I had a look at the online man pages at the FreeBSD website. It looks
as though FreeBSD uses GNU grep, so it shouldn't be too much of a 
complication. 

So you can use grep -r (grep --recursive). Before I knew about this,
I used to do "grep whatever */*" to get just the next level down.
I expect */* will work for you, too, but if you really have thousands
and thousands of files in those directories, the expanded * can be too
big for the shell's argument-length limit. (That happened to me once.)
So "grep -r whatever ." or "grep -r whatever name-of-directory" is
probably better.
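A minimal sketch of the two approaches (the tree1/ directory and its
file names are invented here for illustration; getS is just a sample
pattern):

```shell
# Invented sample tree: one level of subdirectories, as in the
# question, with the pattern "getS" appearing in one file:
mkdir -p tree1/dir1 tree1/dir2
echo 'uses getS here' > tree1/dir1/a.html
echo 'no match here'  > tree1/dir2/b.html

# Recursive search -- the shell never expands a huge glob,
# so it works however many files there are:
grep -r getS tree1

# The glob approach; fine here, but with thousands of files the
# expanded tree1/*/* list can exceed the shell's argument-length
# limit ("Argument list too long"):
grep getS tree1/*/*
```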

If all you want is a list of the filenames and you don't -need- to 
see the same matching line repeated again and again, there is a -l 
(--files-with-matches) option to grep, too, which will print the 
names of the files but not the lines containing the pattern.
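For instance (again with a made-up two-file tree, just to show the
shape of the output):

```shell
# Invented sample tree:
mkdir -p tree2/dir1 tree2/dir2
echo 'calls getS somewhere' > tree2/dir1/a.html
echo 'nothing relevant'     > tree2/dir2/b.html

# -l (--files-with-matches): one line per matching *file*,
# rather than one line per matching *line*:
grep -lr getS tree2
# prints: tree2/dir1/a.html
```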

Oh, and there's a -L to list --files-without-match. So using those
two should give you your results. To get a total, just add wc -l 
(word count, only counting lines) on the end. 

grep -lr getS . | wc -l
              ^
              |
              If you miss the directory name off, grep reads from
              standard input instead and will just sit there
              waiting, btw.
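Putting the two options together on an invented three-file tree, the
with/without counts come out like this:

```shell
# Invented sample tree: two files contain getS, one does not.
mkdir -p tree3/dir1 tree3/dir2
echo 'getS call' > tree3/dir1/a.html
echo 'old code'  > tree3/dir2/b.html
echo 'getS too'  > tree3/dir2/c.html

# Count of files WITH the code (prints 2 here):
grep -lr getS tree3 | wc -l

# Count of files WITHOUT it (prints 1 here):
grep -Lr getS tree3 | wc -l
```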

Telsa



