Resetting SQL Remote Messages
2006-05-08 23:11:38 Source: WEB开发网
When should you refer to this document?
You should refer to this document when all of the following conditions are met:
1) You have lost the log file and mirror log file for a database participating in a SQL Remote replication setup. If you are not using a mirror log file, then loss of the log file alone is sufficient.
AND
2) You have not been running dbremote with the "-u" switch ("Process only backed up transactions"), or, if you have been running dbremote with the "-u" switch, you do not have a valid backup available.
AND
3) You are using SQL Remote for Adaptive Server Anywhere.
What is the importance of the log file to the SQL Remote message tracking system?
The SQL Remote message tracking system tracks messages based on offsets into the sending database's transaction log that correspond to the operations contained in the messages. When a log file is lost, a gap is introduced into the sequence of transaction log offsets. Since the message tracking system relies on that sequence being contiguous, the gap breaks the system. The log offsets required by the message tracking system are maintained in the sys.sysremoteuser table.
For more detailed information on the SQL Remote message tracking system, please refer to the following section of the documentation:
Replication and Synchronization Guide
PART 3. SQL Remote
CHAPTER 18. SQL Remote Administration
The message tracking system
URL: http://manuals.sybase.com:80/onlinebooks/group-sas/awg0700e/dbrsen7/@Generic__BookView
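The contiguity requirement described above can be illustrated with a toy model. Everything in the sketch below is invented for illustration (the function name, the offset values); the real bookkeeping lives in sys.sysremoteuser and the transaction log. It shows only why a gap in the offset sequence stalls message application.

```python
# Simplified model of SQL Remote's message tracking. Each message carries
# the range of log offsets it covers; the receiver accepts a message only
# if it begins exactly where the previous one ended.

def apply_messages(messages, start_offset=0):
    """Apply (begin, end) offset ranges in order; stop at the first gap."""
    applied = []
    expected = start_offset
    for begin, end in messages:
        if begin != expected:      # gap: the log covering [expected, begin) is missing
            break
        applied.append((begin, end))
        expected = end             # offsets must remain contiguous
    return applied, expected

# Contiguous messages are all applied...
ok, next_off = apply_messages([(0, 100), (100, 250), (250, 300)])
# ...but losing the log that covered offsets 100-250 stalls the chain:
broken, stalled_at = apply_messages([(0, 100), (250, 300)])
```

Once the chain stalls, no later message can ever be applied, which is why the offsets must be reset rather than simply waiting for more messages.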
What are the consequences of a lost log file to data movement?
SQL Remote generates and sends messages based on the contents of the transaction log(s). When a log file is lost, the information required to replicate the transactions recorded in that log file is also lost. This means that when you lose a log file, there are row values, or transactions, in that database which have not yet been replicated and which do not exist at any other node in the replicating system. Given this, resetting the SQL Remote message tracking system consists of two principal tasks:
1) Reconciliation of the data between the consolidated database and the remote databases
2) Resetting of the log offsets contained in the sys.sysremoteuser table
Reconciliation of the Data
We will consider 3 scenarios under which the requirements to reconcile data differ. These scenarios are:
1) One-way publication of data from the consolidated database down to the remote databases
2) One-way publication of data from the remote databases up to the consolidated database
3) Bi-directional publication between the consolidated and remote databases
One-way publication of data from the consolidated database down to the remote databases
If your replication environment consists entirely of one-way publications moving data down from the consolidated database to the remote databases, then all of the data in the system will be contained in the consolidated database. In this scenario, the remote databases can be re-extracted without risk of data loss. For guidelines on how best to perform a re-extraction of an existing remote database, please refer to "Appendix A - Recommendations for Re-extracting Remote Databases" in this document.
One-way publication of data from the remote database up to the consolidated database
In this scenario, data exists on the remote sites which does not exist on the consolidated site. Re-extraction of a database without first reconciling the data would result in the loss of any data that had not yet replicated. Since the data flow is from the remote database up to the consolidated database, the version of the data on the remote database can be taken as correct.
One possible technique for reconciling the data is presented in "Appendix B - Data Reconciliation Using Proxy Tables".
Bi-directional publication between the consolidated and remote databases
This scenario is very similar to the one-way publication in which data moves from the remotes up to the consolidated database. However, it is more complicated, since a given row may exist on both the consolidated and the remote database but with different values for some columns. Because data flows in both directions, the values at neither the consolidated nor the remote database can arbitrarily be taken as correct. The data owners will be required to decide on the correct values of the data. The business rules implemented in the conflict resolution triggers of the consolidated database can be used as a guideline for deciding which version of the data should be considered correct.
For more information on how SQL Remote would handle conflicts during normal operation, please refer to the following section of the documentation:
Replication and Synchronization Guide
PART 3. SQL Remote
CHAPTER 15. SQL Remote Design for Adaptive Server Anywhere
Managing conflicts
URL: http://manuals.sybase.com:80/onlinebooks/group-sas/awg0700e/dbrsen7/@Generic__BookView
*Note that the conflict resolution triggers themselves will not fire during the reconciliation process. Their value to the reconciliation process is as documentation of the business rules for resolving conflicts.
One possible technique for reconciling the data is presented in "Appendix B - Data Reconciliation Using Proxy Tables".
Resetting of the log offsets
You can reset the SQL Remote message tracking system either by re-extracting a given remote database or by manually issuing a REMOTE RESET command. For guidelines on how best to perform a re-extraction of an existing remote database, please refer to "Appendix A - Recommendations for Re-extracting Remote Databases" in this document.
The REMOTE RESET command reinitializes the values in the sys.sysremoteuser table for the user specified in the command. If you choose to use this command, then it must be executed at both the consolidated and the remote databases. The next time you run dbremote after you have executed the REMOTE RESET command, new instances of the "0" message will be generated and exchanged between the remote and the consolidated database. For more information on the "0" message, please refer to "Appendix A - Recommendations for Re-extracting Remote Databases" in this document.
It is important to note that the REMOTE RESET command does not recover data. Rather, it introduces a gap in the data by forcing dbremote to ignore all transactions in the log file prior to the point at which the REMOTE RESET command was issued. Issuing the REMOTE RESET command is a safe and effective method of resetting the message tracking system without having to re-extract a remote database when:
1) the databases involved are completely up to date
AND
2) the sections of the log files, at both ends, that will be skipped do not contain transactions to be replicated
On the other hand, if you are not satisfied that the databases involved are up to date, or if you know that the databases are definitely out of sync, then issuing the REMOTE RESET command will increase rather than decrease the gap in data between the consolidated and remote databases. Even in this situation, you may choose to issue the REMOTE RESET commands to reset the message tracking system and allow transactions to replicate while a new database is being extracted and deployed.
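A minimal sketch of the effect described above, assuming a much-simplified picture of the send-side bookkeeping (the dictionary keys below are illustrative stand-ins, not the real sys.sysremoteuser columns): resetting a user's offsets to the current end of the log means every transaction between the old offset and that point is skipped, never recovered.

```python
# Toy model only: REMOTE RESET restarts a user's subscriptions at the
# current end of the transaction log, silently skipping everything in
# between. This is why REMOTE RESET never recovers data.

current_log_end = 5000                 # current end-of-log offset (invented)

# Per-remote-user send state: the next offset dbremote would scan from.
remote_users = {"user1": {"log_sent": 1200, "confirm_sent": 1200}}

def remote_reset(users, user_id, log_end):
    """Move the user's offsets to the current end of the log and return
    how many offset units of logged transactions were skipped."""
    skipped = log_end - users[user_id]["log_sent"]
    users[user_id]["log_sent"] = log_end
    users[user_id]["confirm_sent"] = log_end
    return skipped

skipped = remote_reset(remote_users, "user1", current_log_end)
```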
For more information on the REMOTE RESET command, please refer to the following section of the documentation:
Replication and Synchronization Guide
PART 3. Reference
CHAPTER 25. Command Reference for Adaptive Server Anywhere
REMOTE RESET statement
URL:
http://manuals.sybase.com:80/onlinebooks/group-sas/awg0700e/dbrsen7/@Generic__BookView
Appendix A - Recommendations for Re-extracting Remote Databases
The primary difference between the re-extraction of an existing remote database and the extraction of a new remote database is that, in the case of a re-extraction, messages may currently be in transit to the consolidated database. Any messages that are in transit likely contain valid data. To ensure that as much data as possible is recovered, the re-extraction process should ensure that valid messages are not rejected as a result of the re-extract.
The 2 basic techniques for ensuring that all valid messages are received and applied are:
1) Wait out the message transport layer's latency period
This option is suitable when it is known that messages are typically received within a given time period. For example, file-based messages are present in the inbox of the receiving end as soon as dbremote finishes sending them. In contrast, MAPI- or SMTP-based messages will have some latency as the mail servers forward them from the sending node to the receiving node.
To wait out the latency period, the following process would be used:
i) Run dbremote for the last time on the remote database that is about to be re-extracted
ii) Wait until the latency period has expired
iii) Run dbremote for the last time on the consolidated database to apply any messages that were sent from the remote
iv) Re-extract the remote database at this point
*Note that since the reason for the re-extraction was the loss of a log file, step (i) will not be applicable if the log file was lost on the remote database
2) Extract the new remote database under a different remote user id.
This option is suitable when the latency period is not known, or if the remote database is going to continue to generate messages while the replacement database is re-extracted and deployed to the remote site. Under this option, a new remote user will be created with the same subscriptions as the existing remote user. Typically the 'generation' of this new remote user would be reflected in the remote user id. Since this approach does not overwrite the information in the sys.sysremoteuser table for the existing remote user id, messages from that user can continue to be received and applied.
It is important to remember that in this scenario messages may also be in transit from the consolidated database to a remote database; however, the data represented by the transactions contained in those messages will already exist at the consolidated site and will be included in the remote database that gets extracted.
A typical implementation of this process would be:
i) For an existing remote user (e.g. "user1"), create a new remote user with a higher generation id reflected in its name (e.g. "user1a")
-this new remote user will require its own unique message address
-you will need to create subscriptions for the new user that correspond to the subscriptions for the existing user
ii) Extract and deploy a database for the new remote user ("user1a")
iii) Once the new database has been received and is in use at the remote site, wait out the latency period and then drop the old remote user ("user1")
The "0-0000000-00" Message as a Special Case
The first message that is sent from each of the consolidated and the new remote database is the "0" message. This message contains information that is required to initialize the sys.sysremoteuser table with the log offsets required by the message tracking system. Since this message is a special case, any instance of this message can be applied by any newly extracted database that has not previously applied a message of this type. Given the following series of events, the special case "0" message could be incorrectly applied to a newly extracted database:
1) Extract a remote database using the extraction utility
2) Run dbremote to send the "0" message
3) Extract the same remote database a second time while the first instance of the "0" message is still sitting in that remote's inbox
4) Run dbremote and apply the first instance of the "0" message.
- this message is now out of context since it contains log offset information from prior to the second extraction of this database
- the sys.sysremoteuser table of this remote database will now contain incorrect log offset information
- running dbremote to apply further messages to this remote database will likely result in the reporting of the error "Missing message ..."
*Note: not all instances of this error/warning are due to this behavior. In many cases, the "Missing message ..." error is a sign that the SQL Remote message tracking system is working as expected.
Due to this behavior, it is strongly recommended that the inbox for a remote database be emptied out before the new database is deployed. Two techniques for ensuring that an incorrect "0" message does not get applied to a newly extracted remote database are:
1) Exchange the first set of messages at the consolidated site before deploying the new remote database.
- it is possible to change the message type of a database after it is extracted
- this means that you can use the file message type to perform a full cycle of replication after extracting the remote database and before deploying it
- a full cycle of replication is:
a)dbremote on the consolidated to generate the "0" message
b)dbremote on the remote to receive and apply the "0" message from the consolidated, and generate its own "0" message to send to the consolidated
c)dbremote on the consolidated to receive and apply the "0" message from the remote
2) Perform an initial run of dbremote with the "-r", "-a" and "-b" switches after deploying the new remote database.
- the "-r" and "-a" switches tell dbremote to receive but not apply any messages currently waiting for the database
- the "-b" switch tells dbremote/ssremote to run in batch mode
- performing a single iteration of dbremote with these switches will have the effect of cleaning out the message directory
- the negative side effect of this approach is that it will likely force a resend cycle
For more information on the dbremote switches mentioned above, please refer to the following section of the documentation:
Replication and Synchronization Guide
PART 3. Reference
CHAPTER 22. Utilities and Options Reference
The Message Agent
URL: http://manuals.sybase.com:80/onlinebooks/group-sas/awg0700e/dbrsen7/@Generic__BookView
The Extraction Utility
For instructions on using the extraction utility, please refer to the following section of the documentation:
Replication and Synchronization Guide
PART 3. SQL Remote Administration
CHAPTER 17. Deploying and Synchronizing Databases
Using the extraction utility
URL: http://manuals.sybase.com:80/onlinebooks/group-sas/awg0700e/dbrsen7/@Generic__BookView
Appendix B - Data Reconciliation Using Proxy Tables
The purpose of reconciling data is to provide a fixed point in time at which the data content in a consolidated database agrees with the data content in a remote database. Once this fixed point in time is defined, replication can be initiated. The choice of a reconciliation method needs to be made considering the completeness of the reconciliation and the effort that is required to complete the process. One possible approach to reconciling the data is to use proxy tables. It should be emphasized that this is not the only possible technique for reconciling data between replicated databases. Any procedure that creates a consistent image of the data, on both the consolidated and remote database(s), is valid.
In order to reconcile the data using proxy tables, you must have a usable copy of the databases from both the remote and consolidated nodes. This technique does not require that you have a copy of the log file from either node. The drawback of using this technique is that you need to reconcile inserts, updates, and deletes independently. You will also require the involvement of the data owner(s) to determine which rows and/or column values should be considered correct.
A proxy table allows a table in a separate database to be viewed and manipulated as if it were part of the database you are currently connected to. By defining proxy tables to connect your copy of the remote database to the consolidated database, you can use SQL statements to compare the rows in the remote and consolidated databases.
For more information on proxy tables, please refer to the following section of the documentation:
ASA User's Guide
PART 5. Database Administration and Advanced Use
CHAPTER 29. Accessing Remote Data
Basic concepts
Remote table mappings
URL: http://manuals.sybase.com:80/onlinebooks/group-sas/awg0700e/dbugen7/@Generic__BookView
Identifying Rows to be Reconciled
Once you have defined the proxy tables linking the two databases, you can run select statements to compare the primary keys of rows in the remote database to the rows in the consolidated database. Consider the following example:
Consolidated Database
Table_1
Row_ID  Row_Text
1       Value1
2       Value2_cons
4       Value4
Remote Database
Table_1
Row_ID  Row_Text
1       Value1
2       Value2_remote
3       Value3
If Table_1 in the Remote Database is configured, on the consolidated database, as a proxy table with the name Proxy_Table_1, then a select statement to identify the rows that exist in the remote database but not the consolidated database would be:
SELECT PT1.Row_ID FROM Proxy_Table_1 PT1
WHERE PT1.Row_ID NOT IN (SELECT T1.Row_ID FROM Table_1 T1);
A similar select statement to identify the rows that exist in the consolidated database but not the remote database would be:
SELECT T1.Row_ID FROM Table_1 T1
WHERE T1.Row_ID NOT IN (SELECT PT1.Row_ID FROM Proxy_Table_1 PT1);
Select statements such as the two shown above allow you to identify rows which exist at one but not both of the nodes being reconciled. This provides the basic information you need to reconcile inserts and deletes between the two nodes. The final set of rows to identify is the set of rows which have been updated at one node or the other and for which the updates have not yet replicated. Using the data in the sample tables above, a select statement to determine which rows require reconciliation of updates would be:
SELECT T1.Row_ID, T1.Row_Text AS cons_row_text, PT1.Row_Text AS remote_row_text
FROM Table_1 T1, Proxy_Table_1 PT1
WHERE T1.Row_ID = PT1.Row_ID
AND T1.Row_Text != PT1.Row_Text;
*Note that in practice, you will have to include restriction criteria based on your publication and subscription definitions when performing the comparisons. In the above example, the publication specified the entire table for simplicity.
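The three comparisons above can be run end-to-end against SQLite, with an ordinary table standing in for the ASA proxy table (the queries themselves are plain SQL and carry over unchanged). The setup below simply recreates the sample tables from the example:

```python
import sqlite3

# In-memory SQLite stands in for the consolidated database; Proxy_Table_1
# is an ordinary table playing the role of the proxy table.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Table_1      (Row_ID INTEGER PRIMARY KEY, Row_Text TEXT);
    CREATE TABLE Proxy_Table_1(Row_ID INTEGER PRIMARY KEY, Row_Text TEXT);
    INSERT INTO Table_1       VALUES (1,'Value1'),(2,'Value2_cons'),(4,'Value4');
    INSERT INTO Proxy_Table_1 VALUES (1,'Value1'),(2,'Value2_remote'),(3,'Value3');
""")

# Rows that exist only at the remote (candidate inserts at the consolidated):
remote_only = [r[0] for r in con.execute(
    "SELECT PT1.Row_ID FROM Proxy_Table_1 PT1 "
    "WHERE PT1.Row_ID NOT IN (SELECT T1.Row_ID FROM Table_1 T1)")]

# Rows that exist only at the consolidated:
cons_only = [r[0] for r in con.execute(
    "SELECT T1.Row_ID FROM Table_1 T1 "
    "WHERE T1.Row_ID NOT IN (SELECT PT1.Row_ID FROM Proxy_Table_1 PT1)")]

# Rows present at both nodes but with conflicting values (updates to reconcile):
conflicts = con.execute(
    "SELECT T1.Row_ID, T1.Row_Text, PT1.Row_Text "
    "FROM Table_1 T1, Proxy_Table_1 PT1 "
    "WHERE T1.Row_ID = PT1.Row_ID AND T1.Row_Text != PT1.Row_Text").fetchall()
```

Against the sample data, the remote-only set is row 3, the consolidated-only set is row 4, and row 2 is flagged as a conflicting update.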
At this point you are able to identify all of the rows that require some form of reconciliation. You still need to determine the correct action to take with each row. If a row exists at the remote node but not at the consolidated node, then there are two possible scenarios:
1) The row was recently inserted at the remote node and the insert has not yet replicated up to the consolidated node.
OR
2) The row was recently deleted at the consolidated node and the delete has not yet replicated down to the remote node.
Similarly, there is no arbitrarily correct way to reconcile updates.
The correct reconciliation will depend on the business rules of your SQL Remote set up. When reconciling inserts and deletes it is important to know at which nodes users are allowed to insert and delete rows from each table. One common scenario would be for rows in specific tables to be inserted only at the consolidated node or only at the remote nodes. If the business rules governing data manipulation and movement are understood, then you will be able to determine which differences represent inserts as opposed to updates.
One source of information on the existing business rules for handling updates is your conflict resolution triggers. Conflict resolution triggers only fire when update conflicts are detected while SQL Remote is applying updates. They will not automatically resolve conflicts in a recovery situation. Keep in mind that if you have not defined a conflict resolution trigger then the default behavior is that the last operation to be executed on the consolidated database "wins". The translation of this behavior is that if you have not defined a conflict resolution trigger, then you do not have a preference as to which version of a row is kept.
For more information on how SQL Remote would handle conflicts during normal operation, please refer to the following section of the documentation:
Replication and Synchronization Guide
PART 3. Replication Design for SQL Remote
CHAPTER 15. SQL Remote Design for Adaptive Server Anywhere
Managing conflicts
URL: http://manuals.sybase.com:80/onlinebooks/group-sas/awg0700e/dbrsen7/@Generic__BookView
Applying the Inserts/Updates/Deletes
Once you have identified the rows which need to be moved or modified, you can execute SQL statements involving the proxy tables to modify the rows. When performing the required inserts, Syntax 2 of the INSERT statement allows an insert statement to use a SELECT subquery to generate the rows to be inserted.
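As a sketch, assuming the same sample tables as above and with SQLite again standing in for the consolidated database (an ordinary table plays the proxy-table role), the missing inserts can be applied in a single INSERT ... SELECT statement:

```python
import sqlite3

# Recreate the sample tables from Appendix B; Proxy_Table_1 is an ordinary
# table standing in for the proxy table.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Table_1      (Row_ID INTEGER PRIMARY KEY, Row_Text TEXT);
    CREATE TABLE Proxy_Table_1(Row_ID INTEGER PRIMARY KEY, Row_Text TEXT);
    INSERT INTO Table_1       VALUES (1,'Value1'),(2,'Value2_cons'),(4,'Value4');
    INSERT INTO Proxy_Table_1 VALUES (1,'Value1'),(2,'Value2_remote'),(3,'Value3');
""")

# Copy every row that exists at the remote but not at the consolidated:
con.execute("""
    INSERT INTO Table_1 (Row_ID, Row_Text)
    SELECT PT1.Row_ID, PT1.Row_Text FROM Proxy_Table_1 PT1
    WHERE PT1.Row_ID NOT IN (SELECT T1.Row_ID FROM Table_1 T1)
""")

cons_ids = [r[0] for r in con.execute("SELECT Row_ID FROM Table_1 ORDER BY Row_ID")]
```

After the statement runs, the consolidated table holds rows 1, 2, 3, and 4; reconciling the conflicting update on row 2 and the delete/insert decision on row 4 still requires the business-rule judgment discussed above.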
For more information on Syntax 2 of the INSERT statement, please refer to the following section of the documentation:
ASA Reference Manual
CHAPTER 9. SQL Statements
INSERT statement
URL: http://manuals.sybase.com:80/onlinebooks/group-sas/awg0700e/dbrfen7/@Generic__BookView
Glossary
One-way Publication
-a one-way publication is a publication which exists at either a consolidated or a remote node in a replicating system
-since the publication only exists on one side, data moves in one direction only
Bi-directional Publication
-a bi-directional publication is a publication which exists at both the consolidated and remote node(s) along with corresponding subscriptions
-since the publication exists on both sides, data moves in both directions
-this is the default behavior when you use the Extraction utility to generate your remote databases