MySQL statement-based replication with the LOAD DATA LOCAL INFILE statement

+3 votes
520 views

I am trying to understand MySQL statement-based replication with the LOAD DATA LOCAL INFILE statement.

According to the manual (https://dev.mysql.com/doc/refman/5.0/en/replication-features-load.html), LOAD DATA LOCAL INFILE is replicated as LOAD DATA LOCAL INFILE; however, I am seeing it replicated as LOAD DATA INFILE.

I was wondering why the slave isn't getting LOAD DATA LOCAL INFILE statements. The master is running MySQL 5.0 and the slave MySQL 5.5.

posted Jan 30, 2014 by Kumar Mitrasen

Check statement-based replication and row-based replication; I think that could help you.

1 Answer

+1 vote

The slave does not receive the file from your local disk. When that file arrives at the master (because of your LOAD DATA LOCAL ...), its contents are stored in the binary log and copied (via replication) to the slave, where the slave performs a server-side LOAD DATA ... . This is how STATEMENT-based replication operates.
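
For illustration, a minimal sketch of how this can be observed (the file path, table name, and binary log file name below are assumptions; adjust them to your setup):

 -- On the master: the client reads the local file and sends its contents
 -- to the server, which embeds them in the binary log together with the
 -- statement (hypothetical path and table):
 LOAD DATA LOCAL INFILE '/tmp/data.csv' INTO TABLE t1;

 -- Inspect what was actually logged (log file name is an assumption;
 -- run SHOW BINARY LOGS first to find the current one):
 SHOW BINLOG EVENTS IN 'mysql-bin.000001';

 -- The slave's SQL thread writes the embedded data to a temporary file
 -- (under slave_load_tmpdir) and replays it as a server-side
 -- LOAD DATA INFILE, which is why the LOCAL keyword never reaches the slave.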

answer Jan 30, 2014 by Naveena Garg
Similar Questions
+2 votes

How can I check how many rows are inserted per second, on average, into a table of a MySQL database?
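
One rough way to estimate it (a sketch only; my_table is a hypothetical table name, the figure is really the net row-count change so deletes will skew it, and SHOW GLOBAL STATUS LIKE 'Com_insert' would give a server-wide insert counter instead):

 SELECT COUNT(*) INTO @rows_before FROM my_table;
 SELECT SLEEP(60);                                   -- wait one minute
 SELECT COUNT(*) INTO @rows_after FROM my_table;
 SELECT (@rows_after - @rows_before) / 60 AS avg_inserts_per_second;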

+1 vote

We have a replication setup that handles huge amounts of data. For quite some time, we have been facing issues wherein the slave lags behind the master quite a lot.

So, yesterday we were able to set up parallel replication by incorporating the following changes:

a) To begin with, partitioned some tables into dedicated databases.

b) Set up the "slave-parallel-workers" parameter (a minimal sketch of this follows below).
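
For reference, a sketch of setting that parameter dynamically, assuming the MySQL 5.6 multi-threaded slave (the value 4 is arbitrary):

 STOP SLAVE;
 SET GLOBAL slave_parallel_workers = 4;          -- in 5.6, work is split per database
 START SLAVE;
 SHOW VARIABLES LIKE 'slave_parallel_workers';   -- verify the new value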

The above seems to work functionally fine, but we have one doubt/query about the scalability of this solution.

First, I will jot down the flow as far as I understand it (please correct me if I am wrong):

"Even in parallel-replication scenario, the master writes all the binlog (combined for all databases) in just one file, which then gets passed onto the slave as single-file itself. Thereafter, all the replication commands (combined for all databases) are written sequentially onto one slave-relay file.

Thereafter, as per the documentation, the slave-SQL-Thread acts as the manager, handing over commands to worker-threads depending upon the databases on which the commands run."

So far, so good. However, what would happen if the slave relay log file contains the following:

db1-statement-1 (short-running)
db2-statement-1 (very, very long-running)
db2-statement-2 (short-running)
db1-statement-2 (short-running)
db1-statement-3 (short-running)

We would be grateful if someone could clarify how the above statements will be distributed between the manager and the worker threads (let's say there is just one worker thread per database).

In particular, does the manager thread create internal slave relay log files, one per database's statements?

+1 vote

Looking for some help configuring 5.0.45 master-slave replication. Here's my scenario...

We have a heavily loaded 30 GB 5.0.45 DB that we need to replicate, via a master-slave configuration, to a new, beefier server running the same MySQL 5.0.45, and then cut over to the new server. Due to extreme SAN congestion and a grossly overloaded master server, our DB dumps take 5.5 hours. But we cannot afford that much downtime or locking during the replication transition; we can manage 10-15 minutes, but more is very problematic.

I understand that "FLUSH TABLES WITH READ LOCK" will lock the tables for the duration of the 5.5 hour dump. Is this true?

If so, we'd like to dump/initialize/sync the slave WITHOUT locking anything on the master for more than a few seconds, if at all possible. Will the following give us the dump we need?

 mysqldump --single-transaction --master-data --all-databases
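
For context, a sketch of the slave-side steps that would typically follow such a dump (host, credentials, and dump path below are placeholders; note that --single-transaction gives a consistent snapshot only for transactional (InnoDB) tables):

 -- Load the dump on the new slave; because it was taken with --master-data,
 -- it already contains a CHANGE MASTER TO ... MASTER_LOG_FILE/MASTER_LOG_POS
 -- statement with the binlog coordinates captured at dump time.
 SOURCE /path/to/dump.sql;

 -- Supply the connection details that --master-data does not record:
 CHANGE MASTER TO MASTER_HOST='master.example.com',
                  MASTER_USER='repl',
                  MASTER_PASSWORD='secret';

 START SLAVE;
 SHOW SLAVE STATUS\G   -- watch Seconds_Behind_Master shrink toward 0
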
0 votes

I have been asked to set up multiple-database replication, which I have done before for simple cases; however, there are some nuances in this instance that add complexity, and I'd like to hear your collective expertise on this proposed scenario:

1) Single master database
2) n slave databases (probably 3 to start with)
3) All but 5 tables (123 tables in total) are to be replicated from the master to all the slaves
4) 3 tables from the slaves are to be replicated back to the master

It is mainly item 4) that concerns me: the primary IDs are almost certain to collide unless I seed the auto-increment IDs to partition them into separate ranges, or does MySQL handle this issue itself?
There are some foreign keys on one of the 3 slave-to-master tables, but they point at some extremely static tables that are very unlikely to change. Is the above a feasible implementation?
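
Regarding the ID-range idea in item 4): as far as I know MySQL does not partition the values automatically, but the auto_increment_increment / auto_increment_offset server variables implement exactly that interleaving. A rough sketch, with arbitrary values that would normally live in each server's my.cnf:

 -- Master:
 SET GLOBAL auto_increment_increment = 4;   -- master + 3 slaves
 SET GLOBAL auto_increment_offset    = 1;

 -- Slave 1 (slave 2 would use offset 3, slave 3 offset 4, and so on):
 SET GLOBAL auto_increment_increment = 4;
 SET GLOBAL auto_increment_offset    = 2;

 -- Each server now generates AUTO_INCREMENT values in a disjoint arithmetic
 -- sequence, so rows written on different servers cannot collide on the
 -- primary key when they are replicated back.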

...