Master not creating new binary log in MySQL

0 votes
573 views

We currently have a problem with a master-slave setup running MySQL 5.0.

This is one of our legacy servers that is planned to be upgraded; however, before that can happen the replication needs to be up and running.

The problem we currently have is that the binary logs on the master were moved to a separate partition due to disk space restrictions.

A new binlog file called mysql-bin.1 was created and everything seemed to work fine.

However, the moment the file reaches its maximum size of 100 MB, the server does not go on to create a new binlog file called mysql-bin.2, and the replication fails, stating that it is unable to read the binary log file.

Thus far we have done a FLUSH LOGS and a RESET MASTER, but the same problem occurs: the server creates mysql-bin.1, and the moment it reaches its maximum size and is supposed to create a new file, it stops and does not create the new one.
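
For reference, a minimal sketch of the checks that should narrow down where rotation stops (standard MySQL statements, nothing specific to our setup):

 -- what the master is currently writing (file name and position)
 SHOW MASTER STATUS;

 -- force a rotation by hand; if this fails, the server's error log
 -- normally records the reason
 FLUSH LOGS;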

I really hope this makes sense, and that someone can perhaps point us in the correct direction.

posted Jul 3, 2013 by anonymous


1 Answer

0 votes

What setting(s) did you change to move to the separate partition?

 SHOW VARIABLES LIKE '%bin%';
 SHOW VARIABLES LIKE '%dir%';

(There may be other variables worth checking.) What steps did you take for the move? Did you actually move mysql-bin.1, start over, or something else?

Consider using expire_logs_days. I don't think anything relevant has changed between 4.0 and 5.6.
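
For example, a minimal sketch of the retention side of things (the 7-day value is only an illustration):

 -- let the server purge binary logs older than 7 days automatically;
 -- add the same setting to my.cnf so it survives a restart
 SET GLOBAL expire_logs_days = 7;

 -- or purge by hand up to a point in time
 PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;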

answered Jul 3, 2013 by anonymous
In short, what we did was the following:

 - Binary logs were written to a 20 GB filesystem, and due to company policy we kept expire_logs_days at 7.
 - The system got quite busy over the years, so space was becoming a problem and we had to move the logs to another directory.
 - The setting that was changed is: log_bin = (see the configuration sketch after this list).
 - The old binary logs were moved to the new directory after shutting down the database.
 - The database started up and continued as normal; however, it stopped at the last binary log when that file filled up and complained about a corrupted binary log.
 - A FLUSH LOGS and a RESET MASTER were done and a new binary log, mysql-bin.1, was created.
 - However, the same thing happens here: the binlog file fills up to 100 MB as configured, then the server stops without creating a new binary log.
 - This is the point where the replication crashes as well.
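
For context, the kind of configuration the move implies looks roughly like the sketch below; the /binlogs path is a placeholder, not the actual value from our my.cnf:

 # [mysqld] section, sketch only; /binlogs stands for the new partition
 log_bin       = /binlogs/mysql-bin
 log_bin_index = /binlogs/mysql-bin.index   # must point at the new location too

One thing worth checking after a move like this is the mysql-bin.index file itself: it lists the binlog file names the server knows about, and if it still contains the old paths (or an old copy is left behind in the datadir), reads of the binary log and rotation can misbehave.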

Output of the queries:

mysql> SHOW VARIABLES LIKE '%bin%';
+---------------------------------+----------------------+
| Variable_name                   | Value                |
+---------------------------------+----------------------+
| binlog_cache_size               | 1048576              |
| innodb_locks_unsafe_for_binlog  | OFF                  |
| log_bin                         | ON                   |
| log_bin_trust_function_creators | OFF                  |
| max_binlog_cache_size           | 18446744073709547520 |
| max_binlog_size                 | 104857600            |
| sync_binlog                     | 0                    |
+---------------------------------+----------------------+

mysql> SHOW VARIABLES LIKE '%dir%';
+----------------------------+----------------------------+
| Variable_name              | Value                      |
+----------------------------+----------------------------+
| basedir                    | /usr/                      |
| character_sets_dir         | /usr/share/mysql/charsets/ |
| datadir                    | /var/lib/mysql/            |
| innodb_data_home_dir       |                            |
| innodb_log_arch_dir        |                            |
| innodb_log_group_home_dir  | ./                         |
| innodb_max_dirty_pages_pct | 90                         |
| plugin_dir                 |                            |
| slave_load_tmpdir          | /var/lib/mysql/tmp/        |
| tmpdir                     | /var/lib/mysql/tmp         |
+----------------------------+----------------------------+
10 rows in set (0.00 sec)
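
One limitation of that output: in 5.0, log_bin only reports ON or OFF, not the basename, so the actual binlog location has to be checked against the server's own view and the index file on disk. A sketch (the index path is a guess; it sits next to the binlogs and is named after the log_bin basename):

 # compare what the server has registered with what the index file lists
 mysql -e "SHOW BINARY LOGS;"
 cat /path/to/new/partition/mysql-bin.index   # placeholder path
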
Similar Questions
0 votes

Given a MASTER and a SLAVE.

When launching the SLAVE, it knows about the binary log file used by the MASTER and the position in that log file.

Say the binary log file (on the master) has reached its maximum size, so that it has to switch to a "+1" binary log file: does it inform the SLAVE of that switch so that the SLAVE updates its information about the MASTER status?

I'm reading the documentation at http://dev.mysql.com/doc/refman/5.1/en/binary-log.html and don't see what happens on the slave side.
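
For what it's worth: when the master rotates to a new binary log it writes a rotate event at the end of the old file naming the next one, and the slave's I/O thread follows that automatically. The coordinates the slave is tracking can be inspected on the slave; a sketch of the fields to look at:

 -- Master_Log_File / Read_Master_Log_Pos show what the I/O thread is
 -- reading; Relay_Master_Log_File / Exec_Master_Log_Pos show what the
 -- SQL thread has executed
 SHOW SLAVE STATUS\G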

+1 vote

Looking for some help configuring 5.0.45 master-slave replication. Here's my scenario...

We have a heavily loaded 30 GB 5.0.45 DB that we need to replicate via a master-slave configuration to a new, beefier server running the same MySQL 5.0.45, and then cut over to the new server. Due to extreme SAN congestion and a grossly overloaded master server, our DB dumps take 5.5 hours. But we cannot afford that much downtime or locking during the replication transition; we can manage 10-15 minutes, but more is very problematic.

I understand that "FLUSH TABLES WITH READ LOCK" will lock the tables for the duration of the 5.5 hour dump. Is this true?

If so, we'd like to dump/initialize/sync the slave WITHOUT locking anything on the master for more than a few seconds, if at all possible. Will this give us the dump we need?

 mysqldump --single-transaction --master-data --all-databases
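
For what it's worth, --single-transaction gives a consistent snapshot for InnoDB tables without holding locks for the whole dump, and --master-data only needs a brief global read lock to record the binlog coordinates. A hedged sketch of the usual sequence (host names and paths are placeholders, and this is only consistent if the tables are InnoDB):

 # on the master: consistent dump with the binlog coordinates embedded
 mysqldump --single-transaction --master-data=2 --all-databases | gzip > /backups/full.sql.gz

 # on the new slave: load the dump, then use the CHANGE MASTER TO
 # coordinates written into it (as a comment with --master-data=2)
 gunzip < /backups/full.sql.gz | mysql
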
+1 vote

I am writing a web application in Perl that will create, edit, update and delete data from a MySQL database. I have written a Perl module that will manage the connections (issue database handles). As new users sign up for the application, should each get their own MySQL username and password, or is it okay to execute their queries with the same (one generic) MySQL username and password?
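
A common pattern, though not the only valid one: use a single application account limited to what the application needs, and handle per-user identity inside the application's own tables. A sketch of such an account (database name, user, host and password are placeholders):

 -- one shared application account, restricted to the application's schema
 GRANT SELECT, INSERT, UPDATE, DELETE
   ON myapp.* TO 'myapp_user'@'app-host.example.com'
   IDENTIFIED BY 'choose-a-strong-password';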

+1 vote

I've restored a MySQL backup from our MySQL server onto another server. The backup includes InnoDB tables. After the import, MySQL recognized the InnoDB tables fine, but when I try to do a CHECK TABLE it returns that the table doesn't exist.

Permissions and ownership of the table files (.frm files) are OK, since it recognizes MyISAM tables (they have the same permissions). The InnoDB engine is enabled.

What can cause the tables to appear as "non-existent" when they really do exist?
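
A hedged note: if the restore was done by copying files, the .frm files alone are not enough for InnoDB, because the table definitions also live in the shared tablespace's data dictionary (ibdata). A couple of checks that usually narrow this down (database and table names are placeholders), with the server's error log worth reading right after the attempt:

 -- confirm InnoDB is really available rather than silently disabled
 SHOW ENGINES;

 -- the server-side view of the table; an orphaned .frm typically shows
 -- an error here even though the file exists on disk
 SHOW TABLE STATUS FROM mydb LIKE 'mytable';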

0 votes

I am running MySQL 5.0 for now, and I have a script (written at 12 AM) that stops the MySQL server, unmounts the disk that holds MySQL, and takes a different snapshot from Amazon as the new disk.

Long story short, 50% of the time the command /etc/init.d/mysqld stop will fail

Stopping MySQL: [FAILED]

Unmounting /opt on dbserver1

I then log on to the server and MySQL is still running.

When that happens, what is the best way to stop MySQL? I am trying to write something in my script that will do that, and I am thinking of using the kill command, but is that the best way?

Or should I be using mysqladmin shutdown? Is there a better way to bring down the MySQL server and make sure it is down? Should I be doing something else to ensure it comes down?
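
A sketch of a more deterministic shutdown sequence than the init script alone (paths, credentials and timeouts are assumptions, not taken from the question):

 #!/bin/sh
 # ask the server to shut down cleanly
 mysqladmin -u root -pSECRET shutdown

 # wait for mysqld to actually exit before unmounting anything
 for i in $(seq 1 60); do
     pgrep mysqld > /dev/null || break
     sleep 2
 done

 # last resort: SIGTERM the remaining process; avoid kill -9 unless there
 # is no other option, since the server then has to crash-recover on the
 # next start
 pgrep mysqld > /dev/null && pkill -TERM mysqld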

...