As I posted yesterday, I have a massive MySQL dump to import. I tried BigDump, but one of the tables kept producing errors and so BigDump would exit. I don’t need the whole db imported, so I wrote this to split the dump by table. It produces a new SQL file for every table it finds, numbered sequentially, so if you process the files in alphabetical order it’s the equivalent of importing the whole dump. USE statements get their own files in the same sequence.
#! /usr/bin/perl
use strict;
use warnings;
use 5.010;
my $dump_file = $ARGV[0];
usage() if !$dump_file;
say "using " . $dump_file;

my ($table, @query, $file_name);
my $line_number = 1;
my $find_count  = 0;

open(my $dump_in, '<', $dump_file) or die "Can't open $dump_file: $!";
while (<$dump_in>) {
    my $line = $_;
    if (/^USE\s`(\w+)`/) {
        say "changing db: " . $1;
        ## A USE line means the previous table is complete, so flush it
        ## first; otherwise its file would sort after the USE file and
        ## the table would end up imported into the wrong database.
        if (@query) {
            write_table($table, $find_count, \@query);
            undef @query;
            undef $table;
            $find_count++;
        }
        $file_name = make_file_name("USE_$1", $find_count);
        write_USE($file_name, $line);
        $find_count++;
    }
    elsif (/^-- Table structure for table `(.+)`/) {
        ## If the current line is the beginning of a table definition
        ## and @query is full, it must hold the previous table, so we
        ## want to write that out now:
        if (@query) {
            write_table($table, $find_count, \@query);
            undef @query;
            $find_count++;
        }
        $table = $1;
    }
    next unless $table;
    push @query, $line;
    $line_number++;
}
## The last table in the dump has no following header line to trigger a
## write, so flush whatever is still in @query before finishing.
write_table($table, $find_count, \@query) if @query;
close $dump_in;
say $line_number;
## Subroutines!
sub write_table {
    my ($table, $number, $query_ref) = @_;
    my $file_name = make_file_name($table, $number);
    open(my $out, '>', $file_name) or die "Can't write $file_name: $!";
    foreach (@{$query_ref}) {
        print $out $_;
    }
    close $out;
}

sub write_USE {
    my ($filename, $line) = @_;
    open(my $out, '>', $filename) or die "Can't write $filename: $!";
    print $out $line;
    close $out;
}

sub make_file_name {
    my ($type, $number) = @_;
    $number = sprintf("%05d", $number);
    return $number . "_" . $type . ".sql";
}

sub usage {
    say "Error: missing arguments.";
    say "Usage:";
    say "$0 [MYSQL_DUMP]";
    exit 1;
}
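To give a feel for the output, running it over a dump holding a database called blog with tables comments and posts (names made up purely for illustration) would leave behind files along these lines, which sort straight back into the original order:

00000_USE_blog.sql
00001_comments.sql
00002_posts.sql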
A small downside is that this replaces my 2.5 GB file with about 1800 smaller ones. A scripted importer is to follow.
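In the meantime, a rough sketch of what that importer might look like, assuming the split files sit in the current directory and that the mysql command-line client can pick up its credentials from ~/.my.cnf:

#! /usr/bin/perl
use strict;
use warnings;
use 5.010;

## Feed the numbered .sql files to the mysql client in alphabetical
## (i.e. numerical) order. The USE files select the right database as
## they are reached, so no database needs to be named here.
foreach my $file (sort glob("*.sql")) {
    say "importing " . $file;
    system("mysql < $file") == 0
        or die "mysql failed on $file: $?";
}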