Recovering files
If a file appears to be damaged, first try saving a compacted copy, which copies all the data and rebuilds the tree structure of the database (see Saving a compacted copy). Even if the file can’t be opened, you can use the Advanced Recovery Options dialog box (described below) to make a compacted copy. If a file is too damaged to open or use, you can use the Recover command to have FileMaker Pro salvage as much information as it can and create a new, recovered file.
Note FileMaker Pro Advanced: Runtime applications do not support advanced file recovery features.
FileMaker Pro displays the “Name new recovered file” dialog box. The original (damaged) filename, followed by Recovered, displays for File name (Windows) or Save As (Mac OS).
4. Do one of the following:
   - To have FileMaker Pro use the default file recovery settings (recommended for best results), make sure that Use advanced options is deselected, then skip to the next step.
   - To change the recovery settings, select Use advanced options or click Specify, set options, then click OK. (For more information about advanced recovery options, see Setting advanced file recovery options.)
The Recover.log file displays in a separate window, in tab-delimited format. From left to right, the columns show the date, time, and time zone in which the recovery took place, the filename, the error number, and a description of the recovery event. You can save or print this file for further examination, then close the window.
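If you need to review a long Recover.log outside FileMaker Pro, a short script can filter it. The following is a minimal sketch in Python, assuming the tab-delimited column order described above and assuming that an error number of 0 indicates a normal event; the file’s exact encoding and column set may vary between FileMaker Pro versions.

    import csv

    # Minimal sketch: scan Recover.log for events that reported an error.
    # Assumes the tab-delimited column order described above:
    # date, time, time zone, filename, error number, description.
    with open("Recover.log", encoding="utf-8") as log:
        for row in csv.reader(log, delimiter="\t"):
            if len(row) < 6:
                continue  # skip blank or malformed lines
            date, time, zone, filename, error, description = row[:6]
            # Assumption: "0" means the event completed without error.
            if error.strip() != "0":
                print(f"{date} {time} {filename}: [{error}] {description}")

Because the log is plain tab-delimited text, it can also be opened directly in a spreadsheet application for sorting and filtering.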
After a file is recovered, FileMaker Pro displays status information. What you see depends on the result of the recovery operation and the options that were used. The following table shows all possible results that could be displayed for each database component that can be recovered. (For information about these settings, see Setting advanced file recovery options.)
In many cases, a successfully recovered database is larger than the original. This is because new disk blocks are allocated as the database is recovered. For example, rebuilding the index field by field and record by record can produce a data distribution that differs from (and may take more space than) that of the original file.