Saturday, January 1, 2011
【Weak Current College】How to Protect the Archiving System
Migrating data from primary to secondary storage brings enterprises enormous benefits. Secondary storage reduces the need to purchase primary storage, shrinks the backup window, lowers backup infrastructure investment, and helps satisfy regulatory data-retention policies. Ideally, secondary storage holds the only copy of static data, so the archive tier must be a highly redundant, highly available system.
If the archive tier holds the only copy of data, it cannot simply be a cheap NAS head in front of a shelf of inexpensive drives. It requires a storage design built specifically for the safety and durability demands of redundant archiving. Fortunately, the archive system's other requirements, such as scalability and capacity optimization, can themselves be put to work as part of protecting the archive.
Many archive systems address scale by organizing storage nodes into a cluster. When more capacity is needed, you simply add a node; the system automatically recognizes it and begins using the new capacity. The nodes also form a redundant structure that maintains data availability: even if one or two nodes fail, no data is lost and access is not interrupted.
Second, a clustered storage architecture can provide a high level of data protection. This matters more and more as drive capacities grow, especially now that 2TB hard drives have entered the mainstream: rebuild times for traditional RAID 5 or RAID 6 are approaching their practical limits.
The challenge with traditional RAID is maintaining full redundancy and protection through a rebuild. When a drive fails, once a global hot spare is assigned or the failed drive is replaced, most RAID implementations start a rebuild. With 1TB and 2TB drives, that rebuild can take hours or even days. Throughout the rebuild, the archive data, which may be the enterprise's only copy, is fully exposed. If a second drive fails, the data is permanently lost, and the chance of hitting an unrecoverable read error during a rebuild rises sharply. Even RAID 6 only delays the problem: because the rebuild takes so long, there is a meaningful probability that a third drive fails before it completes, and in deployments with many large drives that probability grows. The absolute probability may be low, but when it happens, the only copy of the data is lost with certainty. A more robust protection method is therefore needed.
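The exposure during a rebuild can be illustrated with a back-of-the-envelope calculation. This is a sketch, not a claim from the article: it assumes a typical consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits read, and a RAID 5 rebuild that must read every surviving drive in full.

```python
# Rough estimate of the chance of hitting an unrecoverable read error
# (URE) during a RAID 5 rebuild. Assumption (illustrative, not from
# the article): one URE per 1e14 bits read, a common drive spec.

def rebuild_ure_probability(surviving_drives, drive_bytes, ure_per_bit=1e-14):
    """Probability of at least one URE while reading all surviving
    drives in full to reconstruct the failed one."""
    bits_read = surviving_drives * drive_bytes * 8
    return 1 - (1 - ure_per_bit) ** bits_read

# A hypothetical 7-drive RAID 5 group of 2TB drives:
# rebuilding one failed drive means reading 6 full drives.
p = rebuild_ure_probability(surviving_drives=6, drive_bytes=2e12)
print(f"Chance of a read error during rebuild: {p:.0%}")
```

Even with optimistic assumptions, the numbers show why a days-long rebuild of multi-terabyte drives leaves the sole copy of archive data uncomfortably exposed.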
An archive system can address this dilemma in one of two ways: mirroring, or a higher level of RAID-style protection. In small deployments, mirroring is the simpler approach. Although the second copy raises cost, it provides redundancy and fast recovery. In most small environments, the capacity savings the archive system achieves elsewhere can offset the capacity that mirroring consumes.
In larger environments, keeping a one-to-one mirrored copy of all data is unaffordable, which is why RAID is often chosen instead. Standard RAID, however, cannot validate the data itself: when corruption is suspected, rebuilding the RAID group is no guarantee that the data is intact. The alternative is a higher level of protection in which data is split into multiple blocks distributed across drives on different storage nodes. If a node fails, the remaining blocks can be reassembled; even if two nodes (each containing, say, four drives) fail, the data remains intact. This approach provides stronger data protection than simple RAID parity.
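The block-distribution idea can be sketched in miniature. Real systems use erasure codes that tolerate two or more node failures; the toy version below uses a single XOR parity block, which survives exactly one lost node, so it illustrates only the principle of reconstructing a missing block from the survivors.

```python
# Simplified sketch of striping data across storage nodes with parity.
# Real archive clusters use stronger erasure codes; XOR parity here
# tolerates exactly one lost node.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def stripe(data, data_nodes):
    """Split data into equal-size blocks plus one XOR parity block."""
    size = -(-len(data) // data_nodes)          # ceiling division
    blocks = [data[i * size:(i + 1) * size].ljust(size, b"\x00")
              for i in range(data_nodes)]
    return blocks + [xor_blocks(blocks)]        # last block is parity

def rebuild(blocks, lost_index):
    """Reconstruct the block held by a failed node from the survivors."""
    survivors = [b for i, b in enumerate(blocks) if i != lost_index]
    return xor_blocks(survivors)

blocks = stripe(b"archive object payload", data_nodes=4)
assert rebuild(blocks, lost_index=2) == blocks[2]
```

Because parity is just another block, the same `rebuild` call reconstructs a lost data block or a lost parity block; production codes generalize this with multiple independent parity blocks.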
Node and drive failures are easy to detect; harder, and more important, is "silent" data loss. In this scenario no drive is ever marked as failed, yet individual blocks on the drive become corrupted. How is that corruption detected? In a traditional system, the only way to confirm it is to read the data back. If additional copies exist on disk and tape, a good copy can be restored from one of those devices, but keeping redundant copies purely for this purpose defeats the point of archiving and adds cost.
Another capability well known in archiving can address this problem: data deduplication, which archive systems already use to optimize storage capacity. A deduplication algorithm generates a signature for each block of data written, and that signature is unique to the block. If the same signature appears again, the second copy of the block is not written to disk.
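The write path just described can be sketched as a hash-indexed block store. SHA-256 is an illustrative choice of signature algorithm, not something the article specifies.

```python
import hashlib

# Sketch of hash-based deduplication: each block's SHA-256 digest is
# its signature; a block whose signature already exists is not written
# again. (SHA-256 is an assumption for illustration.)

class DedupStore:
    def __init__(self):
        self.blocks = {}          # signature -> block data
        self.bytes_saved = 0

    def write(self, block: bytes) -> str:
        sig = hashlib.sha256(block).hexdigest()
        if sig in self.blocks:
            self.bytes_saved += len(block)   # duplicate: skip the write
        else:
            self.blocks[sig] = block
        return sig

store = DedupStore()
a = store.write(b"quarterly report")
b = store.write(b"quarterly report")   # same content is stored only once
assert a == b and len(store.blocks) == 1
```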
An archive system can use these signatures not only to save space but also to validate the data stored on disk. It periodically reruns the signature algorithm over the stored blocks; each run should produce the same signatures, and any mismatch reveals data corruption. Because the archive protects data with a RAIN (redundant array of independent nodes) policy, the corrupted data can then be repaired from the redundant information.
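This periodic verification, often called scrubbing, can be sketched by recomputing each stored block's signature and comparing it with the one recorded at write time. The function and names below are illustrative.

```python
import hashlib

# Sketch of periodic "scrubbing": recompute each stored block's
# signature and compare it to the signature recorded at write time.
# A mismatch exposes silent corruption, which the system can then
# repair from a redundant copy elsewhere in the cluster.

def scrub(blocks: dict) -> list:
    """blocks maps recorded signature -> block data. Returns the
    signatures whose data no longer matches, i.e. corrupted blocks."""
    return [sig for sig, data in blocks.items()
            if hashlib.sha256(data).hexdigest() != sig]

blocks = {hashlib.sha256(b"stable data").hexdigest(): b"stable data"}
assert scrub(blocks) == []          # clean scan: nothing flagged

sig = next(iter(blocks))
blocks[sig] = b"stab1e data"        # simulate silent bit rot on disk
assert scrub(blocks) == [sig]       # the corrupted block is detected
```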
Replication is an important part of a fully optimized archiving strategy. It provides a second, independently maintained copy of the data, protecting not only against a site disaster but also against any other failure mode that could cause data loss. Because this replica is maintained as a single copy, its retention policy should match that of the original archive.
Data deduplication can also support a WAN replication strategy. During replication, only unique changed blocks are sent to the disaster-recovery site's archive system, even when the source data comes from multiple sites. For example, three sites might all replicate to a single disaster-recovery site. When one of the primary sites sends new or changed data, any blocks that already exist at the remote site are not transmitted again. This not only reduces the amount of data the disaster-recovery site must store, but also reduces the WAN bandwidth required.
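The replication decision reduces to a signature lookup against the target site's index, as in this sketch (the function and site names are hypothetical):

```python
import hashlib

# Sketch of dedup-aware WAN replication: the source transmits only
# blocks whose signatures the disaster-recovery site does not already
# hold, even when several sites replicate into the same target.

def replicate(source_blocks, target_signatures):
    """Return the blocks that actually need to cross the WAN,
    updating the target's signature index as they are sent."""
    to_send = []
    for block in source_blocks:
        sig = hashlib.sha256(block).hexdigest()
        if sig not in target_signatures:
            to_send.append(block)
            target_signatures.add(sig)
    return to_send

# The DR site already holds a block that site A is about to send.
target = {hashlib.sha256(b"shared template").hexdigest()}
sent = replicate([b"shared template", b"site-A only data"], target)
assert sent == [b"site-A only data"]   # the duplicate never crosses the WAN
```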
For disk-based archiving to deliver lower storage and protection costs, users must be able to trust it to hold the only copy of their data safely and securely. Guaranteeing that archived data remains intact over the long term, without relying on extra copies, is especially important.