If you have an unwieldy text file to process, splitting it into sections can cut down processing time, especially if you're going to import the file into a spreadsheet. Or you might just want to retrieve a particular set of lines from a file. Enter split, wc, tail, cat, and grep (and don't forget sed and awk). Linux contains a rich set of utilities for working with text files on the command line. For our task today we'll use split and wc. First, let's take a look at our log file....

> ls -l
-rw-r--r-- 1 thegeek ggroup 42046520 2006-09-19 11:42 access.log

We see that the file size is about 42MB. That's kinda big... but how many lines are we dealing with? If we wanted to import this into Excel, we'd need to keep it under 65k lines (the row limit in older versions of Excel). Let's check the number of lines in the file using the wc utility, whose name stands for "word count".

> wc -l access.log
146330 access.log
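As an aside, wc -l prints the count followed by the filename. When you just want the bare number (to use in a script, say), feed the file in on standard input instead. A quick sketch with a throwaway 5-line sample file (the sample.log name here is just for illustration):

```shell
# Build a small 5-line sample file to demonstrate.
printf 'line %s\n' 1 2 3 4 5 > sample.log

# "wc -l sample.log" prints the filename too;
# "wc -l < sample.log" prints only the count.
lines=$(wc -l < sample.log)
echo "$lines"
```

Note that some wc implementations pad the count with leading spaces; comparing it numerically (e.g. with -eq) sidesteps that.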

At 146,330 lines, we're way over our limit. We'll need to split this into 3 segments, which is exactly what the split utility is for.
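Where does "3 segments" come from? It's just the line count divided by the lines-per-file limit, rounded up, and the shell can do that arithmetic for us. A sketch using our numbers:

```shell
total=146330      # lines in access.log, from wc -l
per_file=60000    # maximum lines we want per output file

# Ceiling division: round up so the leftover lines get their own file.
chunks=$(( (total + per_file - 1) / per_file ))
echo "$chunks"    # number of files split will produce
```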

> split -l 60000 access.log
> ls -l
total 79124
-rw-rw-r-- 1 thegeek ggroup 40465200 2006-09-19 12:00 access.log
-rw-rw-r-- 1 thegeek ggroup 16598163 2006-09-19 12:05 xaa
-rw-rw-r-- 1 thegeek ggroup 16596545 2006-09-19 12:05 xab
-rw-rw-r-- 1 thegeek ggroup 7270492 2006-09-19 12:05 xac

We've now split our log into 3 separate files, each containing at most 60,000 lines, which leaves a comfortable margin under the Excel limit. The last file holds the leftover lines. If you wanted to cut this particular file exactly in half instead, you'd do this:
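It's worth sanity-checking a split by confirming the pieces add back up to the original. Here's the same idea at miniature scale, using a hypothetical 13-line demo.log and a "part_" output prefix (split appends aa, ab, ac... to whatever prefix you give it):

```shell
# A 13-line stand-in for access.log.
printf 'line %s\n' $(seq 1 13) > demo.log

# At most 5 lines per piece; files come out as part_aa, part_ab, part_ac.
split -l 5 demo.log part_

# The pieces should concatenate back into the original, byte for byte.
cat part_aa part_ab part_ac > rejoined.log
cmp demo.log rejoined.log && echo "pieces match the original"
```

The first two pieces get the full 5 lines each, and the last piece holds the 3 leftover lines, just like our xac above.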

> split -l 73165 access.log
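You don't have to work out that halfway number by hand, either; let the shell compute it from wc's output. A sketch, with a small demo file standing in for access.log:

```shell
# Stand-in file with an even 10 lines.
printf 'line %s\n' $(seq 1 10) > demo2.log

# Half the line count, rounded up so an odd-length file still fits in two pieces.
half=$(( ( $(wc -l < demo2.log) + 1 ) / 2 ))
split -l "$half" demo2.log half_
wc -l half_*
```

For our 146,330-line access.log this computes 73,165, the same number we used above.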

And, that's all there is to it.