Tuesday, January 31, 2012

【 Weak Current College 】 Technical fundamentals of hot standby



Over years of use, many users have accumulated practical experience with hot standby, and there are several concepts that everyone working with it should understand. The following are the key concepts behind hot standby technology.

A few concepts in hot standby technology need to be pointed out:

1. How hot standby works

Hot standby (high availability) is a fault-isolation technique: it protects business continuity by transferring services away from the point of failure. The business is not recovered on the original server but on the standby server. Hot standby does not repair the failed server; it only isolates the fault.

2. The Active-Active standby mode

Active-Active refers to the state of the business, not of the servers; for a single application there is no true Active-Active. For example, when two servers run a SQL Server database in hot standby, Active-Active refers to different database instances: the same database instance cannot run Active-Active at this level. Put simply, Active-Active means running Active-Standby in both directions across the two servers.

3. Failure detection

Fault detection is the core task of hot standby. The number of detection points a product monitors determines how good the hot standby software is in functionality and performance, and not all software has the same detection capability. Taking PlusWell's dual-machine software as an example, it provides full-system detection at three levels: system level, application level, and network level. System-level detection is a heartbeat between the two machines; application-level detection covers user applications, databases, and so on; network-level detection checks the network connection and, optionally, the network path. Together these are called full fault-detection capability.
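The three detection levels above can be sketched in a few lines. This is only an illustration of the idea, not PlusWell's actual implementation; the heartbeat port, gateway address, and process name are invented assumptions.

```python
import socket
import subprocess

# Illustrative sketch of three-level fault detection. The peer address,
# heartbeat port, and process name are placeholder assumptions.

def system_level_ok(peer_host, heartbeat_port=9999, timeout=2.0):
    """System level: can we open a TCP 'heartbeat' connection to the peer?"""
    try:
        with socket.create_connection((peer_host, heartbeat_port), timeout):
            return True
    except OSError:
        return False

def application_level_ok(process_name):
    """Application level: is the monitored service process running locally?"""
    # Uses pgrep, available on most Unix-like systems.
    result = subprocess.run(["pgrep", "-x", process_name],
                            capture_output=True)
    return result.returncode == 0

def network_level_ok(gateway_host, port=80, timeout=2.0):
    """Network level: is the network path (e.g. to the gateway) reachable?"""
    try:
        with socket.create_connection((gateway_host, port), timeout):
            return True
    except OSError:
        return False
```

A real product would run these checks periodically and combine their results before declaring a fault, rather than acting on a single failed probe.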

4. Server resources

Hot standby resources are the smallest set of related services that a business operation depends on. Different dual-machine products provide different numbers of resources; naturally, the more resources that can be switched over, the more applications the software supports. In hot standby, server resources mainly include the switched network IP address, the computer name, disk volume resources, and the server processes.
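The resource types listed above are typically declared together as a resource group that fails over as a unit. A minimal sketch of such a declaration follows; every name and address is invented for illustration, since each dual-machine product has its own configuration format.

```python
# Hypothetical resource-group declaration for a hot standby pair.
# All names and addresses are illustrative, not any real product's syntax.
resource_group = {
    "name": "sqlserver-group",
    "virtual_ip": "192.168.1.100",   # floating IP that moves on failover
    "computer_name": "DBSERVER",     # network name clients connect to
    "disk_volumes": ["E:", "F:"],    # shared volumes mounted on the active node
    "processes": ["sqlservr.exe"],   # services started on the active node
}

def resources_to_switch(group):
    """Return the ordered list of resources released by the failed node
    and acquired by the standby during a switchover."""
    return ([("ip", group["virtual_ip"]),
             ("name", group["computer_name"])] +
            [("volume", v) for v in group["disk_volumes"]] +
            [("process", p) for p in group["processes"]])
```

Treating the group as one unit matters: switching the IP without the disk volumes, or the volumes without the processes, leaves the business only half alive.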

5. Hot standby switchover

Hot standby switchover generally falls into manual switchover and failover, that is, planned switchover (initiated by an operator) and unplanned switchover (failover).

Note that not all resources have to be switched over. Taking PlusWell's hot backup software as an example, it provides:

(1) Locally monitored resources, which are not switched over.

(2) Ordinary resources, which can be switched between the two machines.

(3) Quick resources, which can be switched over rapidly.

In the general case, dual-machine switchover takes 1-5 minutes, while a quick switchover takes 3-5 seconds. Users should choose the appropriate switchover service according to their needs and operational characteristics; from a cost perspective, the shorter the switchover time, the higher the price.

6. The difference between hot standby and backup

Hot standby means high availability, while backup means data backup. These are two different concepts, and the products serve two entirely different functions. Hot standby primarily guarantees business continuity, implemented by transferring services away from the point of failure; backup's primary objective is to prevent data loss. So backup is about data recovery, not application failover.




Monday, January 30, 2012

【 Weak Current College 】 Why hot standby service is necessary



Hot standby is a technology that many enterprises are already using or starting to adopt. As the technology has developed we have gained experience with it, and long-term use has deepened our understanding. The concept of hot standby service has a broad sense and a narrow sense.

Broadly speaking, hot standby uses two servers that back each other up and perform the same service. When one server fails, the other can take over its tasks without human intervention, automatically ensuring that the system continues to provide service.

Hot standby generally requires a shared storage device, but in some cases two separate servers can also be used.

Implementing hot standby requires professional cluster software or dual-machine hot standby software.

In the narrow sense, hot standby refers specifically to server hot standby in active/standby mode. Server data, including database data, is written to two or more servers simultaneously, or a shared storage device is used. At any time only one server is running. When the running server fails, the backup server's software detects the failure (typically through heartbeat diagnostics) and activates the standby machine, ensuring that the application returns fully to normal in a short time.
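The activate-on-heartbeat-loss behavior described here can be sketched as a small state machine. The miss threshold of three intervals is an illustrative assumption; real products make it configurable.

```python
# Minimal active/standby failover state machine: the standby activates
# after a fixed number of consecutive missed heartbeats. The threshold
# of 3 is an illustrative assumption.

class StandbyMonitor:
    def __init__(self, miss_threshold=3):
        self.miss_threshold = miss_threshold
        self.missed = 0
        self.active = False  # starts life as the standby node

    def on_heartbeat(self):
        """Peer is alive: reset the miss counter."""
        self.missed = 0

    def on_timeout(self):
        """One heartbeat interval elapsed with no heartbeat received."""
        if self.active:
            return
        self.missed += 1
        if self.missed >= self.miss_threshold:
            self.activate()

    def activate(self):
        # A real product would mount the shared volumes, take over the
        # virtual IP, and start the application services here.
        self.active = True
```

Requiring several consecutive misses before activating is what keeps a single dropped packet from triggering an unnecessary, and potentially dangerous, takeover.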

Why use hot standby? Because servers fail.

Server failure can have many causes, such as hardware failure, operating system failure, or application software failure. Generally speaking, with technical personnel on site, restoring a server may take ten minutes, several hours, or even days. Practical experience shows that unless a simple restart fixes it (and the underlying risk may remain), recovery often takes several hours. If no technician is on site, recovery takes even longer.

For some important systems, however, users find such long service interruptions very hard to tolerate. Hot standby is therefore needed to avoid prolonged outages and ensure that the system provides long-term, reliable service.

To decide whether to use hot standby, the correct approach is to examine the importance of the system and the impact of a service interruption. That is: how long an outage can your users tolerate, and what consequences follow if service cannot be restored quickly?

When considering hot standby, note that a hot standby switchover in the general sense takes about a minute, during which service may be briefly interrupted; once the switchover completes, service resumes. Hot standby is therefore not seamless and uninterrupted, but it does guarantee that when a system failure occurs, normal service is restored quickly and the business is barely affected. Without hot standby, a server failure may mean several hours of service interruption, and the business impact can be very serious.

Another point to stress: servers fail far more often than switches or storage devices. Compared with a switch or a storage device, a server is a much more complex piece of equipment, comprising hardware, an operating system, and application software. Not only can equipment failure cause a service outage; software problems can also leave the server unable to work properly.

It should also be noted that other protective measures such as disk arrays (RAID) and data backup are very important, but they cannot replace the role of hot standby.

Sunday, January 29, 2012

【 Weak Current College 】 Hot backup mode versus fault-tolerant servers --- Powered by 【 China Power House Network 】



Choosing a hot backup method is an important part of an enterprise's data protection; a good method determines the data recovery rate and the sustainability of the business. Hot standby mode uses two identically configured server systems; server-cluster fault tolerance, for its part, is also a multi-server fault-tolerance technology.

This section, however, describes single-server fault tolerance: a server built for high performance and fault tolerance, whose resilience far exceeds that of server clusters and hot standby mode, making it more suitable for industries with especially demanding fault-tolerance requirements, such as securities, telecommunications, finance, and healthcare.

When a traditional cluster system fails, the server's operation must be interrupted and workloads switched to the standby server before repair and recovery can begin; the cost and losses involved are not what users want to see. The biggest advantage of a single fault-tolerant server is that it can automatically isolate the faulty module and replace the damaged part for maintenance without interrupting the running system; once all physical faults are eliminated, the system automatically resynchronizes, effectively removing the customer's worries. Precisely for this reason, single-server fault tolerance, which rose after hot standby mode and cluster technology, attracts more and more attention. More importantly, it can be implemented with industry-standard (IA) server components, giving fault-tolerant servers an impressive cost advantage.

A fault-tolerant server achieves fault tolerance in the true sense by lock-stepping all hardware at the CPU clock level: every component, including CPU, memory, and I/O buses, is redundantly duplicated, and all redundant parts run in synchronization. Failure of any single part causes neither system failure nor data loss. Many fault-tolerant systems are now based on IA servers and fully compatible with Windows 2000; previously, fault tolerance was achievable only on RISC systems. Implemented on IA servers, this technology raises server reliability to 99.999 percent while the server runs without interruption.

Hot standby mode and fault-tolerant servers are positioned slightly differently, which follows from the difference in the availability each achieves. Hot backup mode can generally deliver 99.9% availability; fault-tolerant servers can achieve 99.999%. Accordingly, most hot standby deployments are in industries where business continuity requirements are not extremely strict, such as public security systems, military systems, or individual manufacturers, where data may be interrupted for a short period. Demanding industries such as telecommunications, banking, securities, and healthcare worldwide prefer fault-tolerant servers. Note also that hot backup and server clusters are not the same: hot backup usually requires two identically configured servers, while a server cluster has no such strict requirement, a point that confuses many readers.
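The gap between 99.9% and 99.999% availability is easy to make concrete by converting each figure into permitted downtime per year:

```python
# Convert an availability percentage into maximum downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability_percent):
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

# 99.9%  ("three nines") -> about 525.6 minutes, i.e. roughly 8.8 hours/year
# 99.999% ("five nines") -> about 5.3 minutes/year
```

That hundredfold difference in allowed downtime is what separates the two classes of product, and why the stricter industries pay for fault-tolerant hardware.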

In addition, because hot standby mode requires at least two servers, the purchase of software (operating systems, middleware, dual-machine backup software, and so on), software maintenance, upgrades, and hardware upgrades all require roughly double the investment of a single fault-tolerant server; and when the dual-machine backup software itself fails, maintenance is difficult and brings the customer great trouble. So although a single fault-tolerant server costs more in hardware than a dual-machine backup setup, its total cost of ownership (TCO) is far lower. On the other hand, hot backup mode has the advantage of flexible configuration: many hot backup solutions are assembled by systems integrators from different manufacturers' server products to meet different customer needs. Overall, though, the fault-tolerant server is the trend of future development.




Saturday, January 28, 2012

【 Weak Current College 】 Dual-machine hot standby FAQs



Hot standby means installing the central server's services on two servers that back each other up, with only one server running at a time. Below we answer the questions of most common concern; we hope this helps everyone.

Q: Can you explain what hot standby is?

A: So-called hot standby means installing the central server's services on two servers that back each other up, with only one running at a time. When the running server fails, the backup server starts automatically and takes over (generally within 2 minutes), ensuring the whole network stays up. In practice, hot standby provides a failover capability for the network's central server.

Q: When is hot standby needed?

A: This question is actually quite simple. No server works forever; none is immune to failure. The decision to use hot standby should therefore start from the importance of the system and the degree to which end users can tolerate a service interruption. For example: how long an outage can your network's users tolerate, and what consequences follow if service cannot be quickly restored?

Q: With RAID and data backup technology in place, is hot standby still necessary?

A: RAID and backup are no substitute. Backing up important data only allows recovery after a system problem has occurred, and RAID, in the author's experience, only solves hard disk problems. When the server itself has a problem, whether in its hardware or its software system, service is interrupted, and neither RAID nor backup can avoid that interruption. So for network systems that need high security and continuous, reliable application service, hot standby remains very important. In fact, think of it this way: if your server broke down, consider how long it would take you to restore it to normal operation, and you will understand the importance of hot standby.

Q: What is the difference between hot standby and a cluster?

A: Conceptually, hot standby is a kind of cluster. Clusters generally fall into two categories. One is the pure application-server cluster, where each application server accesses a unified database server but needs no file sharing or shared storage; this kind of cluster is relatively simple. The other is database-server hot standby, where two servers typically share a storage device and generally run in active/standby mode (high-end systems also run in parallel mode, with both servers providing service at the same time).

Q: How can hot standby be implemented for database services without shared storage?

A: Through software-based hot standby, that is, without a shared storage device: the local data is replicated directly between the hosts. Obviously the biggest advantage of this approach is saving the investment in expensive storage equipment, but its disadvantage is not hard to find: it can produce data inconsistency, or slow down database reads.

Consider an example: if a service interruption forces a switch to the backup server, a small number of transactions already committed on the primary may not yet have been applied on the standby. After the backup machine starts from the replicated data and subsequent operations are performed, finding those lost transactions is quite difficult. This approach therefore suits systems that are not very sensitive to losing a small amount of data.
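The lost-transaction window can be illustrated by comparing the two machines' transaction logs at the moment of the crash. All the transaction ids below are invented for illustration:

```python
# Illustration of the asynchronous-replication risk: transactions the
# primary committed but the standby had not yet applied at crash time
# are lost when the standby takes over. All ids are invented.

primary_committed = [101, 102, 103, 104, 105]   # log on the failed primary
standby_applied   = [101, 102, 103]             # log replayed on the standby

def lost_transactions(primary_log, standby_log):
    applied = set(standby_log)
    return [txn for txn in primary_log if txn not in applied]

# After failover, transactions 104 and 105 exist nowhere in the running
# system; recovering them requires the failed primary's disk, if possible
# at all.
```

This is exactly why the shared-storage approach described next avoids the problem: with a single copy of the data, there is no replication lag to lose.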

The standard solution, by contrast, is hot standby based on a shared storage device plus dual-machine software. It can switch quickly without human attention and loses no data, but the investment in storage equipment is relatively high.

Q: How do you select and implement a hot standby configuration scheme?

A: 1. Be application-driven: carefully analyze your high-availability requirements.

2. Decide whether to use database hot standby, an application-server cluster, or replication software.

3. Select the specific equipment, software, and models. A reminder: different software and different hard disks or other storage devices have compatibility constraints between them, so consult a professional before buying, to avoid ending up with dual-machine software that is incompatible with the storage devices you procured.

4. After implementation is complete, be sure to test the setup to ensure it works properly; note also that during regular operation the system should still be tested periodically to confirm it can switch over normally.


Friday, January 27, 2012

【 Weak Current College 】 Making a disaster backup system effective



With the development of information technology across industries in recent years, and the increasingly important position of information systems in the national economy, the security of running information systems has become a focus of attention for enterprise decision-makers. In particular, a series of domestic and international security incidents and frequent natural disasters mean that people can no longer leave the lifeline of the enterprise, its data and its capacity for continuous operation, to chance.

Threats to information system security cannot be held off by routine maintenance efforts alone, and the serious consequences that disasters inflict on information systems propagate outward to customers and to society. Building effective protection against the various disaster threats to information systems, safeguarding social harmony and stability and avoiding mass incidents, will be a decisive initiative in the years ahead.

However, things often backfire: many organizations spend a huge cost to complete a disaster-prevention system, only to find that when a real "crash" comes, the preparations do not protect them. The resulting investment loss and social impact are difficult to estimate.

There are typical recent domestic examples: some enterprises built disaster recovery systems and even passed national departments' internal-control standards and emergency drills, yet still suffered unexpected consequences when disaster struck. Often a small failure leads to hours of business standstill, raising questions about the construction standards for emergency response systems; as for actually bringing the disaster backup center online for business, that seems even further out of reach for many. Where is the problem?

Here we must say to the policy makers of disaster-prevention construction: the effectiveness of a disaster backup system is a matter of training an army for a thousand days to use it for one hour. A disaster backup system that lacks effectiveness not only fails to guarantee its disaster-prevention objectives; it can even begin to have side effects on the business systems it is meant to protect.

In recent years, planners of disaster recovery systems in many industries have asked the author: how should a disaster recovery system be established? Which matters more in the end, the disaster backup technology route or a tightly organized disaster recovery process? How do you choose the proper technology route? Is disaster recovery simply a matter of building an offsite disaster backup center?

First, the effectiveness of a disaster backup system depends on setting practical disaster-prevention targets and choosing a technology route that matches those targets. Second, understanding effectiveness requires a deeper realization: the system must defend comprehensively, preventing not only low-probability natural disasters but also the much more probable equipment failures and logical failures; a tight, multi-layered protection network is the only way to win. Specifically, to build a disaster-prevention system that never fails, the following links must be addressed:

1) Refine the disaster-prevention objectives

Do not just speak of disaster prevention in general terms while omitting defensive targets for the accidents that devices and networks are actually prone to; that only continues down the old road walked by many failed traditional disaster-prevention projects. Some early traditional disaster recovery systems did indeed cover only disaster emergency mechanisms and not the frequent everyday failures; this defensive target itself wasted much disaster-prevention investment and forced either reconstruction of the disaster recovery system or investment in a more advanced protection system.

Among information-system security incidents, the accidents that most easily overwhelm a disaster recovery system are often these: a database system crashing and failing to run, data files damaged or lost, storage-device failure, and so on. In our country, multiple enterprises have felt the destructive force of such failures.

2) Build a hierarchical recovery system

In the past, people often believed that building an offsite disaster recovery center meant that everything would run or recover offsite. This really is a big mistake. An offsite disaster backup center can be built only as a defense against catastrophes; activating it requires not only a rigorous business-continuity audit process (for example, early-warning and declaration mechanisms) but also a great deal of disaster-backup-center staff effort, and recovering data back to the production center is a complex and lengthy process.

Moreover, some disaster backup technology routes cannot guarantee the data consistency and integrity of the application system after startup, which is an important reason why many enterprises that have completed construction generally do not activate the disaster backup center, or fear activating it. As for the frequent failures (such as logical failures and equipment failures), making their recovery depend on activating and repairing from the disaster backup center undoubtedly amplifies and exposes the risk, and the fundamental effect of the recovery is unpredictable. This is one of the many reasons disaster recovery systems let people down when failures occur.

The fundamental solution lies in adopting an advanced disaster recovery technology route and completing a hierarchical recovery system: equipment failures (including logical failures) are repaired locally, and the offsite disaster center is activated only for true site-level disasters.

At present, some advanced disaster recovery technologies (such as Feikang's continuous-data-protection disaster backup technology) have instant local repair capability, so the emergency response to an equipment failure can be completed in a very short time (generally down to even a few seconds). For a business system, this business-continuity assurance defuses the significant social and economic risks (for example, massive claims) that internal failures could otherwise cause.

3) Validate first, repair later

In the past, the accepted repair technique was "restore": copy the backup data from the medium back to the production system, then start up and wait to see whether the recovery and the business are effective. This technique carries many risks. First, during the long restore, the recovery time and recovery reliability are completely unpredictable. Second, even after the restore "succeeds", you may find that the recovered data is not the version you need, or that required data does not exist; it is then completely impossible to roll back to the initial state, and the system enters an even more serious uncontrolled state. In this class of technology, business continuity is not considered at all; with everyone anxiously watching the restore, how can one expect the business to keep running in the meantime?

At present, leaders in many industries have moved to validation-capable disaster recovery technologies. Their characteristic is that when data corruption occurs, a point-in-time copy of the data in its original format can immediately be presented for validation, and the business system can run on it at once, putting business continuity first; afterwards, the remaining idle time can be used to further repair the equipment. Such systems are described as "production first, repair later".

4) Study the transmission bandwidth carefully

An important aspect of offsite disaster recovery is transmission-bandwidth technology: insufficient bandwidth often causes excessive data delay, leaving the disaster backup center's data unusable, and so on. Many disaster backup technology routes compete on how well they streamline their bandwidth usage, and rightly so. Effective bandwidth-thrifty transmission technology significantly reduces the construction cost of disaster prevention, or greatly improves the real-time quality of the disaster backup center's data, and also gives a very effective speed boost to data recovery back to the production center.
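Whether a link can keep the backup center current comes down to simple arithmetic: the usable link bandwidth must cover the production data change rate. The figures and the 80% efficiency factor below are invented for illustration:

```python
# Can a WAN link keep up with the production change rate?
# The efficiency factor (protocol overhead, contention) is an
# illustrative assumption.

def replication_keeps_up(change_rate_mb_per_s, link_mbit_per_s,
                         efficiency=0.8):
    """True if the usable link bandwidth covers the data change rate."""
    usable_mb_per_s = link_mbit_per_s * efficiency / 8  # Mbit/s -> MB/s
    return usable_mb_per_s >= change_rate_mb_per_s

# A 100 Mbit/s link at 80% efficiency moves 10 MB/s: enough for an
# 8 MB/s change rate, but not for 15 MB/s, where the backup center
# falls ever further behind.
```

When the check fails, the options are exactly the ones the paragraph above names: a leaner replication technology (compression, change-only transfer) or a bigger link.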

In summary, the choice of disaster recovery technology route and the refinement of the protection targets are the two pillars of disaster recovery construction, and they directly determine the final result. Any oversight or defect will lead directly, when disaster or failure strikes, to the business system's heart stopping for a long time, prompting the crowd to ask: didn't we already build a backup system? Didn't we already pass the internal audit of the emergency plan?

Such questioning leaves those responsible terribly embarrassed. Choosing an advanced and effective disaster recovery technology route (such as the aforementioned Feikang continuous data protection technology) will let people overcome their past fear of disasters and failures and increase their confidence in facing them; with that fear gone, information systems can head toward safe and smooth operation.



Thursday, January 26, 2012

【 Weak Current College 】 Understanding SSD disk fragmentation



Internally, an SSD is organized into BLOCKs; in this article's simplified model, a BLOCK has four PAGEs, and a PAGE generally holds 4KB. If you think of it as a dormitory building, it looks like this: the building has several floors (BLOCKs), each floor has 4 rooms (PAGEs), and each room houses 4 students (each student is the equivalent of 1KB).

Under normal circumstances, each room must be filled with four students before a new room is opened up; this is exactly how the wear-leveling algorithm behaves. But once students have lived in every room, the SSD fragmentation problem emerges.

The population of each room is not fully stable: after a few days, some students in some rooms leave school. Beds are vacated, and the school arranges for new students to move in. But placing the newcomers leaves the building manager a bit confused, because every room has been lived in at some point; which rooms actually have free beds?

Before the manager has sorted this out, the principal's office repeats its standing instruction (the wear-leveling algorithm): fill the rooms one by one. The poor manager is completely dizzy; all he can do is call every student out of the building and then arrange them back into the rooms one by one. When this kind of thing happens inside an SSD, that is SSD fragmentation slowing the system down, because data is continuously being shuffled internally.

The problem does not stop there. While the manager is rearranging the dormitories, a head teacher asks him to gather all the students of one class. Because of the previous rearrangements, that class's students are scattered throughout the building, in some rooms on every floor. To notify them all, the manager must climb all the way to the top. Reflected in the SSD, the result is that application software runs slowly; in hard-drive terminology, fragmentation has split the software's data across too many places.

From the description above, fragmentation chaos is most likely when the SSD's capacity is nearly exhausted, because the manager (the SSD controller), the principal's office (the wear-leveling algorithm), and the class (the application) all end up pulling against one another. If the drive has plenty of free space, the worst case is that a few more rooms get occupied, and repeated reshuffling of data is far less likely.
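The effect of free space on internal data movement can be shown with a toy model of the dormitory analogy. The 4-pages-per-block layout follows the article's simplified picture, and the "rewrite the whole block when no clean page exists" policy is a deliberate simplification of what a real controller does:

```python
# Toy model of SSD write amplification. Updating one page when an erased
# page is free costs 1 physical page-write; when the drive is "full"
# (no erased pages), the controller must rewrite a whole block,
# relocating its other live pages too. PAGES_PER_BLOCK = 4 follows the
# article's simplified layout; real blocks hold far more pages.

PAGES_PER_BLOCK = 4

def pages_written(update_count, erased_pages_available):
    """Total physical page-writes needed to update `update_count` pages."""
    total = 0
    for _ in range(update_count):
        if erased_pages_available > 0:
            erased_pages_available -= 1
            total += 1        # write goes straight to a clean page
        else:
            # Read-erase-rewrite a whole block: the updated page plus the
            # block's other (PAGES_PER_BLOCK - 1) live pages are rewritten.
            total += PAGES_PER_BLOCK
    return total

# With 10 free pages, 10 updates cost 10 writes (amplification 1.0).
# With no free pages, 10 updates cost 40 writes (amplification 4.0).
```

Even this crude model shows why a nearly full drive slows down: every logical write turns into several physical writes, which is exactly the internal shuffling the dormitory story describes.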

So, given the above, is there a good way to solve SSD fragmentation? Currently one option is to reduce the occupancy per room, for example 1KB per PAGE. But that multiplies the manager's daily workload: managing 100 rooms is already no light task, and managing 400 rooms makes every day all the more difficult.

Of course, without the wear-leveling constraint the trouble would be much smaller: the controller could strike first and erase vacated rooms in advance. But doing so cannot guarantee the SSD's lifespan.

In short, fragmentation is very difficult to solve on current SSDs. Increasing the system cache may be a good idea, but without a good management algorithm it will not achieve much, and we still have to look to upstream manufacturers for ingenious designs. In any event, the outlook for SSDs is bright; after all, no product has ever appeared without problems, and the situation discussed above is quite extreme, one that users will not necessarily encounter.

Wednesday, January 25, 2012

【 Weak Current College 】 Tips for improving hard disk performance



Whatever type of hard drive you have, the system's default settings may not put it in its best working state. If you want your drive working at its best, learning to set the related parameters is not to be missed. Starting from the drive's operating mode, this article recommends ways to improve your hard disk's performance; we hope it helps!

1. Restoring the interface transfer rate

Consider a computer whose motherboard is a Foxconn 865A01-PE-6LS based on the i865PE chipset, with a Seagate Barracuda 7200.7 120GB drive. When professional hardware-testing software was used to inspect the system, the hard disk turned out to be working in UDMA5 mode rather than the standard SATA150 mode, and in that mode the drive's transfer rate was not as fast as expected. Why is this, and how do we solve it?

In fact, the data transfer speed differs between operating modes, and a SATA interface should normally be significantly faster than an ordinary IDE interface. When a SATA drive does not reach the expected transfer speed, the most likely cause is that the motherboard has mapped the SATA channel onto an ordinary IDE channel: the SATA interface is forced to consume the motherboard's IDE2 channel, and the SATA drive can only be treated as an ordinary IDE drive, which is why its transfer rate is not as fast as imagined. To restore the SATA drive's interface transfer rate to normal, we need to adjust the operating mode in the system's BIOS setup. The specific steps are as follows:

First restart the computer and press the Del key promptly during startup to enter the BIOS setup interface. Once in, find the setting for the SATA option and check whether its value is "Compatible Mode" or "Enhanced Mode". If it is set to "Compatible Mode", the SATA drive is treated as an ordinary IDE disk, and its interface transfer speed naturally suffers; so change the SATA option to "Enhanced Mode", the only setting under which the SATA channel is mapped to a separate IDE3 or IDE4 channel and the drive's interface transfer rate returns to normal. After adjusting the SATA option, be sure to save the settings and restart the computer for them to take effect.

2. Correctly viewing the hard drive's working mode

We all know that if the hard disk works in an inappropriate mode, its data transfer speed drops sharply. So how can you find out which working mode your drive is currently in? In general, you can easily view the drive's current working mode through Device Manager:

First click "Start"/"Settings"/"Control Panel" to open the Control Panel window, double-click the System icon to enter the System Properties window, click the window's "Hardware" tab, then on that tab click the "Device Manager" button to open the system's device list;

Then locate the "IDE ATA/ATAPI controllers" item in the device list, double-click the "Primary IDE Channel" option beneath it, click the "Advanced Settings" tab in the settings dialog that pops up, and on that tab you can clearly see the drive's current transfer mode.

Of course, if you cannot see the drive's working mode by the method above, your system has most likely installed the VIA IDE Miniport Driver. In that case, find the "VIA Bus Master PCI IDE Utility" icon in the system tray area and double-click it with the left mouse button; the interface that pops up will show the drive's working mode. If you find that the drive is not working in its standard mode, its transfer performance will be noticeably affected; in that case you will need to enter the BIOS setup interface and adjust the working mode appropriately so that the drive performs in its best condition.

Small tip: as hard drive capacities gradually increase, the working-mode setting matters more and more. Large-capacity drives generally support the DMA transfer mode, in which the drive's transfer rate and overall performance are clearly better than in ordinary PIO mode; so once you have confirmed that your drive supports DMA, you had better use DMA mode instead of conventional PIO mode. To switch modes, simply set the transfer mode to "DMA" in the interface shown in Figure 1.







Tuesday, January 24, 2012

【 Weak Current College 】 A complete guide to hard drive buying skills



To meet ever-growing network application performance demands, we usually add new servers to share the load and improve system performance; this is horizontal scaling. In fact, you can also improve the configuration of an existing server to raise its overall performance, which is vertical scaling, because the performance of a server's components is critical to the server as a whole. In particular, the hard drives purchased to store data directly affect the performance of the services the server provides.

Improving server performance starts with finding the bottleneck that constrains it. Different applications have different bottlenecks: some need to focus on the processor and memory, others on hard disk or network I/O throughput. So for which applications does the hard drive bottleneck deserve particular attention?

Communication servers (e-mail/messaging/VOD): fast I/O is the key for this type of application, so hard disk I/O throughput is the main bottleneck;

Data warehousing (online transaction processing/data mining): large-scale commercial data storage, cataloging, indexing, data analysis, high-speed business computing, and so on all require good network and disk I/O throughput;

Databases (ERP/OLTP, etc.): a server running a database needs powerful CPU processing power and ample memory capacity to cache data, and at the same time very good I/O throughput;

Other applications: for applications focused on data queries and network communication that read and write the hard drive frequently, the choice of hard drive directly affects the overall performance of the server.

Factors influencing hard drive purchase

Among the parameters to consider when buying a hard drive, the first worth mentioning is the interface standard. Today's mainstream hard drives come with two kinds of interface: EIDE and SCSI. There are also products with IEEE 1394, USB, and FC-AL (Fibre Channel-Arbitrated Loop) interfaces, but they are rare. Almost all ordinary computers now use IDE-interface drives based on the Ultra DMA/33/66/100 standards, whose advantage is a low price and a very high penetration rate.

At the same time, some low-end servers also use IDE drives, and at present almost all server boards integrate an IDE controller. In mid- and high-end servers, however, the IDE channel is generally used only to connect low-speed peripherals such as optical drives, while the hard drives generally use the SCSI interface standard; Inspur Yingxin servers, for example, generally adopt Ultra160 SCSI drives to provide higher disk throughput. SCSI drives have very low CPU usage and obvious advantages in supporting more devices and multitasking, which makes them better suited to server applications; of course, SCSI drives are also much more expensive.

Note, however, that the system's data-transfer bottleneck lies not in the PCI bus or the interface rate but in the hard drive itself, determined by the drive's mechanical and structural design among many other factors.

Indicators for measuring a hard drive purchase

The indicators that measure hard drive performance include:

Spindle speed

Apart from capacity, spindle speed is the most notable of all hard drive performance parameters, and it is also the primary factor determining a drive's internal transfer speed and sustained transfer rate. Current drive spindle speeds are 5400rpm, 7200rpm, 10000rpm, and 15000rpm. Judging from the present situation, 10000rpm SCSI drives have a cost-effectiveness advantage and are now mainstream, while 7200rpm and slower drives are being phased out of the server drive market.

Internal transfer rates

The internal transfer rate is a decisive factor in evaluating a drive's overall performance. Hard disk data transfer rates divide into internal and external transfer rates. The external transfer rate, also called the burst data transfer rate or interface transfer rate, measures the speed of outputting data from the drive's cache; current Ultra160 SCSI technology has reached an external transfer rate of 160MB/s. The internal transfer rate, also known as the maximum or minimum sustained transfer rate, refers to the speed of reading and writing data on the drive's platters; most current mainstream drives achieve 30MB/s to 60MB/s. Because a drive's internal transfer rate is lower than its external transfer rate, only the internal transfer rate can serve as a true measure of drive performance.

Single-disc capacity

Besides contributing to capacity growth, the other important significance of single-platter capacity is increasing the drive's data transfer rate. Single-platter capacity grows through increases in the number of tracks and in the linear recording density within each track. Increasing the number of tracks greatly benefits seek time: since the platter's radius is fixed, more tracks means a shorter distance between tracks, so the time the head takes to move from one track to another is shortened, which helps increase the random data transfer speed. Growth in linear density within a track is directly linked to the drive's sequential data transfer speed: higher linear density lets each track store more data, so more data is read from the head into the drive's buffer on each revolution of the platter.

Average seek time

Average seek time is the time the head takes to move to the target data track. It is an important mechanical indicator when choosing a drive, usually 3ms to 13ms; it is recommended not to consider SCSI drives with an average seek time above 8ms. The average seek time together with the average latency (determined entirely by spindle speed) determines the time the head needs to find the cluster where the data is located, and that time directly affects the drive's random data transfer speed.
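To make the relationship concrete, here is a small illustrative sketch (not from the original article): the average latency is half a revolution at the given spindle speed, and adding it to the quoted average seek time estimates the average access time. The 7200rpm/8.5ms figures below are hypothetical.

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """Average latency: half a revolution, in milliseconds."""
    return 0.5 * 60_000.0 / rpm

def avg_access_time_ms(avg_seek_ms: float, rpm: float) -> float:
    """Average seek time plus average rotational latency."""
    return avg_seek_ms + avg_rotational_latency_ms(rpm)

# A hypothetical 7200rpm drive with an 8.5ms average seek time:
print(round(avg_rotational_latency_ms(7200), 2))  # 4.17
print(round(avg_access_time_ms(8.5, 7200), 2))    # 12.67
```

This also shows why spindle speed matters so much: going from 5400rpm to 10000rpm cuts the average latency from about 5.6ms to 3ms before the seek time is even considered.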

Cache

Increasing cache capacity is a way of improving a drive's overall performance. Because the drive's internal transfer rate differs from its external transfer rate, a cache is needed to adapt between the two speeds. The size of the cache has a great impact on the drive's sustained data transfer rate. Caches come in capacities of 512KB, 2MB, 4MB, even 8MB or 16MB; for video capture, video editing, and other work requiring heavy disk input/output, a drive with a large cache is the ideal choice.

Selecting a server hard drive

Once you know the performance indicators of a server hard drive, the natural next step is to choose a drive suited to your specific server application to improve system performance.

Choosing a high-performance hard drive

Because SCSI offers low CPU utilization, efficient multitasking, more connected devices, and longer connection distances, for most server applications it is recommended to use SCSI drives with the newest Ultra160 SCSI controllers; for low-end small-server applications, the latest IDE drives and controllers can be used. Having determined the interface type, focus on the performance specifications discussed above: weigh spindle speed, single-platter capacity, average seek time, cache, and other factors against your budget, and select the drive with the best price/performance ratio.

RAID technology

A redundant array of independent disks (RAID) offers higher performance, data integrity, and data availability than ordinary disk storage. Especially today, when disk I/O consistently lags CPU performance and has become an ever more prominent bottleneck, RAID is an effective way to compensate for the gap.

According to how data and checksums are laid out across the array's disks, RAID technology is divided into different levels (RAID levels) with different technical characteristics; readers can consult product manuals when selecting one.
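As a rough guide to how the common levels trade capacity for redundancy, the following illustrative sketch (a simplified model, not from any particular manual) computes the usable space of RAID 0, 1, and 5 arrays:

```python
def usable_capacity_gb(level: int, disks: int, disk_gb: float) -> float:
    """Usable space for common RAID levels (simplified model)."""
    if level == 0:                 # striping: no redundancy
        return disks * disk_gb
    if level == 1:                 # mirroring: half the raw space
        if disks % 2:
            raise ValueError("RAID 1 needs an even number of disks")
        return disks * disk_gb / 2
    if level == 5:                 # distributed parity: one disk's worth
        if disks < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (disks - 1) * disk_gb
    raise ValueError("unsupported RAID level")

# Four hypothetical 73GB SCSI disks:
for lvl in (0, 1, 5):
    print(lvl, usable_capacity_gb(lvl, 4, 73))  # 292.0, 146.0, 219.0
```

The same trade-off runs the other way for reliability: RAID 0 loses everything when one disk fails, while RAID 1 and RAID 5 survive a single-disk failure.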

To better improve disk I/O performance, RAID is recommended. According to the characteristics of the application, place frequently accessed read/write drives into a RAID 0, RAID 1, or RAID 5 array. At present, low-end servers can adopt IDE RAID, as in the Inspur Yingxin NP200, while for mid- and high-end servers a SCSI RAID controller is recommended; pay attention to the controller's technical indicators, such as CPU type, channel type and count, cache size, and whether the cache has battery backup. Note: a motherboard-integrated RAID controller has no dedicated controller chip of its own and relies on the motherboard's SCSI controller, so it takes up more host-processor time and reduces the server's processing capacity.

Hot-swap technology

Besides evaluating a drive purchase by its performance indicators, you should also consider its failure rate, mean time between failures, and ease of maintenance. In a specific application, first choose long-life, low-failure-rate drives to reduce the probability and frequency of failures. This involves the drive's MTBF (mean time between failures) and its data protection technologies: the higher the MTBF value the better; the drives in Inspur Yingxin servers, for example, generally have MTBF values above 1.2 million hours. Technologies such as S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology), shared by all drive makers, and similar vendor technologies such as Seagate's DST (Drive Self Test) and IBM's DFT (Drive Fitness Test), are of great significance for the safety of the data saved on the drive.

In addition, you should consider how to keep data from being lost and reduce server downtime once a drive is damaged. RAID can be used to ensure data reliability and security, and hot-swap technology guarantees that the server keeps running correctly while a drive is replaced or repaired. Hot-swap is now very common in high-end servers and is also used as an important server grading criterion. Server components supporting hot-swap generally include hard drives, power supplies, fans, and PCI slots; SCSI drives also have a dedicated hot-plug-capable SCA2 interface (80-pin) and a SCSI backplane, making hot-swappable drives an easy choice.






Saturday, January 21, 2012

【 Weak Current College 】 Identifying a hard drive's merits from its model number


First, identifying hard drives by model number — Seagate

Seagate model numbers are relatively simple to identify. Seagate's desktop-oriented drives come mainly in the Barracuda ATA series (including Barracuda ATA I/II/III/IV and V) and the U series. It is important to note that Seagate has adopted an entirely new product naming scheme: the new Barracuda series is named Barracuda 7200.7 Plus (8MB cache), Barracuda 7200.7 (2MB cache), and Barracuda 5400.

This naming scheme is consistent with that of Seagate's high-end SCSI drives, and the new series will replace the Barracuda ATA and U series. As for specific drive model numbers, all Seagate drives produced after 1 January 1999 are numbered in four parts: product brand + form factor + capacity + interface type.

1. "ST", the initials of the manufacturer Seagate.

2. A single digit representing the drive's form factor; for example, 3 indicates a 3.5-inch drive.

3. A four- or five-digit number representing the drive's capacity; for example, 30620 indicates a capacity of 30620MB.

4. One to three letters representing the interface type the drive supports: A indicates the IDE interface; AG the dedicated laptop ATA interface; W and N the SCSI interface; W/FC Fibre Channel; AS Serial ATA.
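The four-part scheme above can be sketched as a small decoder. This is only an illustration of the rules as stated, not an official Seagate tool, and it covers only the interface codes listed above:

```python
import re

def decode_seagate(model: str) -> dict:
    """Decode a post-1999 Seagate model number per the stated rules."""
    m = re.fullmatch(r"ST(\d)(\d{4,5})([A-Z/]{1,4})", model)
    if not m:
        raise ValueError("not a recognized Seagate model number")
    size, cap, iface = m.groups()
    interfaces = {"A": "IDE", "AG": "laptop ATA", "W": "SCSI",
                  "N": "SCSI", "W/FC": "Fibre Channel", "AS": "Serial ATA"}
    return {
        "brand": "Seagate",
        "form_factor_in": {"3": 3.5}.get(size, size),
        "capacity_mb": int(cap),
        "interface": interfaces.get(iface, iface),
    }

# decode_seagate("ST330620A") →
# {'brand': 'Seagate', 'form_factor_in': 3.5,
#  'capacity_mb': 30620, 'interface': 'IDE'}
```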

Second, identifying hard drives by model number — Maxtor

Since the introduction of the seventh-generation DiamondMax, Maxtor's product naming has become rather confusing: the seventh generation is called DiamondMax Plus D740X, while the sixth generation was named DiamondMax Plus 60; the eighth generation changed back to the simpler style of DiamondMax Plus 8, and the ninth generation is similarly named DiamondMax Plus 9. From the sixth generation onward, the most obvious family trait is that every drive in this series has "DiamondMax Plus" at the front of its name; by contrast, Maxtor's plain DiamondMax drives carry only "DiamondMax", lacking the "Plus". Understanding these series names is a great help in telling newer products from older ones: everyone knows the ninth generation is newer than the seventh, so its single-platter capacity is higher and, other things being equal, its performance is certainly better. As with Seagate, Maxtor's family names do not convey the exact meaning of a drive's model number, so the following example illustrates how to read a Maxtor model number:

1. One or two letters or digits representing the Maxtor family code: 6Y indicates the ninth-generation DiamondMax; 6E the eighth generation; 6L the seventh generation; 5T the sixth generation; 2R the first-generation DiamondMax (Meizuan); 2B the second-generation Meizuan; 3 (40GB and below) or 9 (above 40GB) the "Star Diamond" (Xingzuan) generation; 4W the second-generation Xingzuan; 4D (4K) the new Fireball generation.

2. A three- or four-digit number representing the drive's capacity; for example, 040 represents 40GB.

3. A letter representing the drive's interface type: J or L represents the UltraATA/133 interface; H UltraATA/100; U UltraATA/66; D UltraATA/33. If the letter is P, the drive has an 8MB cache and an ATA/133 interface; finally, if the letter is M, the drive supports the Serial ATA interface.

4. A single digit originally representing the number of physical heads in the drive; note in particular that since the eighth-generation DiamondMax this last digit has always been "0" and no longer carries any meaning. Other Maxtor model numbers can be read by analogy. For example, 6Y080P0 indicates a ninth-generation DiamondMax drive with 80GB capacity, ATA/133 support, and an 8MB data cache.
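The Maxtor rules can be sketched the same way. This illustrative decoder (not an official tool) covers only the two-character family codes and the interface letters listed above, and ignores the meaningless trailing digit:

```python
import re

def decode_maxtor(model: str) -> dict:
    """Decode a Maxtor model number per the rules above (illustrative)."""
    m = re.fullmatch(r"(\d[A-Z])(\d{3,4})([A-Z])(\d?)", model)
    if not m:
        raise ValueError("not a recognized Maxtor model number")
    fam, cap, iface, _tail = m.groups()   # trailing digit carries no meaning
    families = {"6Y": "DiamondMax Plus 9", "6E": "DiamondMax Plus 8",
                "6L": "DiamondMax Plus D740X", "5T": "DiamondMax Plus 60"}
    ifaces = {"J": "UltraATA/133", "L": "UltraATA/133",
              "H": "UltraATA/100", "U": "UltraATA/66", "D": "UltraATA/33",
              "P": "UltraATA/133, 8MB cache", "M": "Serial ATA"}
    return {"family": families.get(fam, fam),
            "capacity_gb": int(cap),
            "interface": ifaces.get(iface, iface)}

# decode_maxtor("6Y080P0") → DiamondMax Plus 9, 80GB, ATA/133, 8MB cache
```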

Third, identifying hard drives by model number — Hitachi

After absorbing IBM's hard disk business unit, the new Hitachi drives closely resemble the original IBM drives. The IDE drives remain the Deskstar and Travelstar series; the popular Hitachi 7200rpm IDE drives include the Deskstar 180GXP (nicknamed "Tenglong") and the fourth-generation Deskstar 120GXP. From these two series names alone, can you guess Hitachi's naming rules? They are actually easy to understand: the "Deskstar" at the front of the series name marks the desktop series. Besides Deskstar, Hitachi also has the Ultrastar SCSI series for the high-end storage market and the Travelstar series for the mobile storage market. Immediately after the series name comes the highest capacity in the series: for example, the 120 in Deskstar 120GXP indicates that the series goes up to 120GB, and likewise 40GV indicates a series topping out at 40GB. The last two or three letters indicate the spindle speed: GXP represents 7200rpm and GV represents 5400rpm. So much for the series names; the above is only the broad naming rule, and specific products follow their own model-number rules. Let us take two examples: DTLA-307075 and IC35L080AVVA07. The first is the style used by older IBM drives; beginning with the fourth generation, the original IBM (now Hitachi) drives adopted the style shown in the second example.

DTLA-307075 in detail

1. "D" indicates that the device is a disk drive.

2. Two letters representing the drive's series code: for example, TL represents the Deskstar 75GXP or Deskstar 40GV; PT the Deskstar 37GP or Deskstar 34GXP; among older model codes, JN represents the Deskstar 25GP and Deskstar 22GXP, and TT the 16GP and 14GXP.

3. A letter representing the drive's interface type, mainly one of the following: A = ATA; S or U = UltraSCSI, UltraSCSI Wide, or UltraSCSI SCA; C = SSA (Serial Storage Architecture).

4. A single digit representing the drive's form factor: 2 represents 2.5-inch (portable drives, Travelstar); 3 represents 3.5-inch (desktop or server drives, Deskstar or Ultrastar).

5. One or two digits representing the drive's spindle speed: 05 (or 5) represents 5400rpm; 07 (or 7) represents 7200rpm.

6. A three- or four-digit number representing the drive's capacity; for example, 075 represents 75GB.

IC35L080AVVA07 in detail:

1. "IC" stands for "IBM Corporation";

2. "35" indicates a 3.5-inch drive;

3. "L" indicates a height of 1 inch (T = 0.49 inch, N = 0.37 inch);

4. "080" indicates an 80GB capacity;

5. "AV" indicates the ATA interface;

6. "VA" is the unique series code (unique code), here representing the 120GXP series; "ER" in this position would represent the 60GXP series;

7. "07" indicates a spindle speed of 7200rpm. From the breakdown above, IC35L080AVVA07 is an 80GB product in the Hitachi 120GXP series. All future Hitachi drives will use this numbering scheme, so we can read them by extrapolation.

Fourth, identifying hard drives by model number — Western Digital

Relatively speaking, Western Digital model numbers are the easiest to recognize. Western Digital now makes only IDE drives, with no SCSI drives or 2.5-inch notebook drives, and other lines such as the Protege series have been cancelled, so everything it produces is named Caviar; that is, as long as it is a Western Digital IDE drive, it belongs to the "Caviar" series. A specific Western Digital model number looks like WD2000BB. First: the leading "WD", as everyone can surely guess, is the abbreviation of Western Digital. Second: the middle 3 to 4 digits represent the drive's capacity in units of 100MB; for example, the 2000 here indicates that the drive's capacity is 200GB, WD1200BB indicates a 120GB product, and so on. Third: the last two letters convey three pieces of information: spindle speed, interface type, and data cache size.

As follows:

1. The first of the two letters was originally used to distinguish spindle speed: A represents a 5400rpm Caviar drive; B a 7200rpm Caviar drive; E a 5400rpm drive of the old Protege series. However, since Western Digital launched the 8MB-cache version of the Caviar drives, this letter has gained one more interpretation: J means the drive is a 7200rpm unit with an 8MB cache. For example, WD1200JB indicates a high-end "Caviar" series drive with 120GB capacity and an 8MB cache.

2. The last letter of the model number generally represents the interface type: A represents UltraATA/66 or an earlier interface; B represents UltraATA/100; W marks Western Digital products intended for the A/V (digital audio/video) field. What identifying letter Western Digital will use for Serial ATA drives has not yet been announced, so for now we do not know.
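The Caviar-era rules above can likewise be sketched as a decoder. This is an illustration, not an official Western Digital tool, and covers only the letters listed:

```python
import re

def decode_wd(model: str) -> dict:
    """Decode a Caviar-era WD model number per the rules above."""
    m = re.fullmatch(r"WD(\d{3,4})([ABEJ])([ABW])", model)
    if not m:
        raise ValueError("not a recognized WD model number")
    cap, speed, iface = m.groups()
    speeds = {"A": "5400rpm Caviar", "B": "7200rpm Caviar",
              "E": "5400rpm Protege", "J": "7200rpm Caviar, 8MB cache"}
    ifaces = {"A": "UltraATA/66 or earlier", "B": "UltraATA/100",
              "W": "A/V product"}
    return {"capacity_gb": int(cap) * 100 / 1000,   # digits are units of 100MB
            "series": speeds[speed],
            "interface": ifaces[iface]}

# decode_wd("WD1200JB") → 120GB, 7200rpm Caviar with 8MB cache, UltraATA/100
```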

Fifth, identifying hard drives by model number — Samsung

Compared with the major brands discussed earlier, Samsung gives the impression of paying more attention to the low-end storage market. Their drives' performance is average and capacities are not high, but they have one obvious advantage: relatively low noise at work. Overall, Samsung's drives comprise the V series and the P series. The V series includes the V series quiet drives and the V5, V6, V7, and V8, all with 5400rpm spindle speeds; the P series includes the P1 and P2, which run at 7200rpm. Because Samsung's product line is relatively short, its model numbers are also relatively simple; an example will give the specifics.

Example: SV2046D

1. The first identifier is the letter "S", representing a Samsung drive.

2. The second identifier is the letter "V" or "P", indicating the series the drive belongs to: V represents the V series and P the P series.

3. The third identifier is a four-digit number representing the drive's capacity in units of 10MB; for example, 2046 indicates 20.46GB.

4. The last identifier is the letter "D" or "A", indicating the drive's interface type. Since Samsung only makes IDE drives, A and D both represent the same ATA/UDMA interface types. At present Samsung has no Serial ATA drives planned, so its identifier for serial interfaces is likewise not yet known.
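Samsung's short scheme makes for the simplest decoder of all; as before, this is an illustration of the rules as stated, not an official tool:

```python
import re

def decode_samsung(model: str) -> dict:
    """Decode a Samsung model number per the four rules above."""
    m = re.fullmatch(r"S([VP])(\d{4})([DA])", model)
    if not m:
        raise ValueError("not a recognized Samsung model number")
    series, cap, _iface = m.groups()     # D and A mean the same interface
    return {"series": {"V": "V series (5400rpm)",
                       "P": "P series (7200rpm)"}[series],
            "capacity_gb": int(cap) * 10 / 1000,   # digits are units of 10MB
            "interface": "ATA/UDMA"}

# decode_samsung("SV2046D") → V series (5400rpm), 20.46GB, ATA/UDMA
```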

Sixth, recognizing a hard drive starts with its model number

Once you really learn to read model numbers and "know your hard disk", a few simple combinations of letters and digits tell you a drive's main characteristics. That alone is not enough to judge a drive's real performance, but when buying a drive, users can at least use it as their own purchasing guide. Learn to "read the model, know the drive", and when the talk turns to hard disks you can even show off a little; others will see you in an expert's light.

In fact, for any drive, whether SCSI, IDE, or a 2.5-inch portable drive, to understand it thoroughly you should not only know its approximate performance characteristics but also judge its quality through actual use and benchmark results. Still, getting to know a drive can begin with its model number; give it a try!





Friday, January 20, 2012

【 Weak Current College 】 An introduction to router testing technology, test types, and methods




A router is a network interconnection device that forwards packets. It can support multiple protocols (e.g. TCP/IP, IPX/SPX, AppleTalk) and forward packets at multiple layers (e.g. the data-link layer, network layer, and application layer).

1. Purpose and content of testing


A router needs two or more logical ports and at least one physical port. Based on the network-layer address of a received packet and the routing table it maintains internally, the router determines the output port and the address of the next router or destination host, and rewrites the link-layer header. The routing table must be maintained dynamically to reflect the current network topology; routers usually accomplish this by exchanging routing information with other routers.

(1) Router categories

Current router classification methods vary; the various classifications are related but not entirely consistent. Routers can usually be classified by capacity, by structure, by network position, by function, or by performance. Router standards mainly classify by capacity, dividing routers into high-end and low-end: a router with backplane switching capacity greater than 20Gbit/s and throughput greater than 20Mbit/s is called a high-end router, and routers below these figures are low-end routers. Correspondingly, router test specifications divide into a high-end router test specification and a low-end router test specification.

(2) Purpose and content of testing

By testing a router, you can learn: the best performance the router can provide; its behavior under different loads; parameters for router-based network design models; how it handles unexpected traffic; its performance limits; the different qualities of service it can provide; the impact of different router architectures on functionality and performance; its functional and performance indicators; the router's effect on network security; the conformance of its protocol implementations; its reliability; and the strengths and weaknesses of the router product.

Low-end router device tests mainly include: general tests, i.e. electrical safety tests; environmental tests, including high/low temperature, humidity, and high-temperature storage tests; physical interface tests, testing the electrical and physical interfaces a low-end router may have; protocol conformance tests, testing the conformance of protocol implementations; performance tests, testing the router's performance; and management tests, testing whether the router has the major network management functions.

High-end router tests include: interface tests, covering the interfaces a high-end router may have; ATM protocol tests, testing conformance to the ATM protocol requirements; PPP protocol tests, testing the conformance of the PPP protocols; IP protocol tests, testing IP protocol conformance; routing protocol tests, testing routing protocol conformance; network management function tests, verifying the management functions; performance and QoS tests, testing router performance and validating QoS capabilities; network synchronization tests, testing the device's synchronization timing capabilities; reliability tests, verifying device reliability; power supply tests, measuring the machine's power consumption and related items; and environmental tests, including high/low temperature, humidity, and high-temperature storage tests.

The two test specifications differ in drafting organization and drafting time. Beyond the tests above, it is recommended to also consider the test items listed below. (1) Functional testing: mainly verifying the availability of every designed feature of the product. (2) Stability and reliability testing: the general approach is to run the equipment for a long time under heavy load and assess and analyze its capacity for sustained, high-load operation. (3) Interoperability testing: products from different vendors must be able to interoperate, so a network product is tested in network environments interconnecting products from a variety of vendors, for example verifying a router's interoperability with Cisco products, or a switch's interoperability with Cisco, Lucent, 3Com, and Intel products.

2. Test methods

Router test methods are typically divided into the local test method, the distributed test method, the remote test method, and the coordinated test method. Because of space limitations, this article does not describe the characteristics and scope of application of every method, and introduces only the remote test method most commonly used in router testing.

A few terms: the point of control and observation (PCO) usually consists of two first-in first-out (FIFO) queues and functions like a pair of input and output ports, sending commands into one end of a queue and receiving responses from the other end of the same queue; the implementation under test (IUT) is the entity being tested; the lower tester (LT) is the part of the test system, known as the lower test system, that interacts with the tested layer beneath the IUT through a PCO.

3. Testing categories

Integrating the tests above, router tests can generally be divided into the following categories: functional testing, performance testing, stability and reliability testing, conformance testing, interoperability testing, and network testing.

(1) Functional testing

Router functions can generally be divided into the following areas.

(1) Interface functions: used to connect the router to networks, divided into LAN interfaces and WAN interfaces. LAN interfaces include Ethernet, token ring, FDDI, bus, and other network interfaces; WAN interfaces include E1/T1, E3/T3 (DS3), and generic serial interfaces (configurable as X.21 DTE/DCE, V.35 DTE/DCE, RS232 DTE/DCE, RS449 DTE/DCE, EIA530 DTE).

(2) Communication protocol functions: responsible for handling communication protocols, including TCP/IP, PPP, X.25, frame relay, and so on.

(3) Packet forwarding: mainly responsible for forwarding packets between ports (including logical ports) according to the contents of the routing table, and for rewriting link-layer header information.

(4) Routing information maintenance: responsible for running routing protocols to maintain the routing table; the routing protocols may include RIP, OSPF, BGP, and others.

(5) Management and control functions: router management and control comprises five functions: the SNMP agent, the Telnet server, local management, remote monitoring, and RMON. These provide several different ways to manage and control the router, and allow events to be recorded to a log.

(6) Security functions: accomplish packet filtering, address translation, access control, data encryption, firewalling, address assignment, and related functions.

A router need not implement all of the functions above in full. But because the router exists as a network device, there is a minimum feature set: the functions any router must support. Because the vast majority of functional tests are covered by interface testing, performance testing, protocol conformance testing, and network management testing, functional testing of a router in general only needs to confirm the features the other tests cannot cover. Router functional testing generally uses the remote test method.

(II) Performance testing

The router is the core equipment of an IP network, so its performance has a direct impact on the network's size, stability, and scalability. Because the IETF has no special provisions for router performance testing, testing generally follows RFC 2544 (Benchmarking Methodology for Network Interconnect Devices). But a router is distinguished from a simple network interconnect device, so performance testing should also include the router's unique metrics, for example routing table size and routing protocol convergence time.

Router performance tests should include the following metrics.

(1) Throughput: tests the router's packet forwarding rate, usually defined as the number of packets per second the router can forward without loss. The limit point is generally found by binary search. (2) Latency: tests, at rates within the throughput, the interval from receipt of a packet to forwarding it. The latency test should be repeated 20 times and the average taken. (3) Packet loss rate: tests the proportion of received packets the router discards under different loads. The loads usually range up to wire speed (the line's highest transmission rate), with a step size of 10% of wire speed. (4) Back-to-back frames: tests the maximum number of packets the router can handle without loss when packets arrive at the minimum inter-packet gap. This actually tests the router's buffering; if the router forwards at wire speed (throughput = interface media wire speed), this test is meaningless. (5) System recovery time: tests how quickly the router resumes normal work after overload. The method is to send traffic at the smaller of 110% of throughput and wire speed, then after 60 seconds drop the rate to 50% and measure the interval from that moment to the last lost packet. If the router forwards at wire speed, this test is meaningless. (6) System reset: tests the interval from a software reset or power-cycle until the router returns to normal operation, where "normal" usually means forwarding at the measured throughput.
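The binary ("dichotomy") search for the zero-loss throughput limit in item (1) can be sketched as follows. This is a stand-in, not a real tester API: `send_at_rate` and the 840,000 fps device limit are invented for illustration, while 1,488,095 fps is the standard 64-byte-frame line rate of Gigabit Ethernet:

```python
def throughput_search(send_at_rate, line_rate_fps, resolution_fps=100):
    """Binary-search the zero-loss throughput (RFC 2544 style).

    `send_at_rate(fps)` stands in for the traffic generator: it offers
    traffic at `fps` frames/s for the trial duration and returns the
    number of frames lost.
    """
    lo, hi = 0.0, float(line_rate_fps)
    while hi - lo > resolution_fps:
        mid = (lo + hi) / 2
        if send_at_rate(mid) == 0:   # no loss: throughput is at least mid
            lo = mid
        else:                        # loss: throughput is below mid
            hi = mid
    return lo

# Simulated device that forwards loss-free up to 840,000 fps:
dut_limit = 840_000
result = throughput_search(lambda fps: 0 if fps <= dut_limit else 1, 1_488_095)
print(round(result))  # close to 840000 (within the search resolution)
```

Each probe is one full RFC 2544 trial, so the search resolution trades test time against precision.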

When testing the RFC 2544 metrics above, the following factors should be taken into account.

Frame format: it is recommended to follow the RFC 2544 frame formats. Frame length: step from the smallest frame length up to the MTU; for Ethernet use 64, 128, 256, 512, 1024, 1280, and 1518 bytes. Verification of received frames: exclude received non-test frames, such as control frames and routing-update frames. Broadcast frames: to verify the effect of broadcast frames on router performance, repeat the tests above with 1% of the test frames sent as broadcasts. Management frames: to verify the effect of management frames on router performance, repeat the tests with one management frame per second included. Routing updates: test the effect on performance of routing updates that change the next-hop port. Filters: test the effect of filter conditions on router performance; setting 25 filter conditions is recommended. Protocol addresses: test the performance impact of the router receiving traffic for 256 random network addresses. Bidirectional traffic: test the performance impact of ports sending and receiving data simultaneously. Multiport test: consider the performance impact of traffic distributed across ports in full-mesh or partial-mesh patterns. Multi-protocol test: consider the performance impact of the router handling multiple protocols at the same time. Mixed packet lengths: in addition to the incremental packet lengths proposed above, check the effect of mixed packet lengths on router performance; RFC 2544 requires that the mix contain all the test lengths but does not fix the proportion of each length. The author recommends testing with the packet-length distribution of the actual network; for example, absent special application requirements, an Ethernet interface can use 50% 60-byte packets, 10% 128-byte, 15% 256-byte, 10% 512-byte, and 15% 1500-byte packets. Beyond the test items above, RFC 2544 also recommends the following tests.

① Route flapping: tests the effect of route flapping on forwarding capacity. The rate of routing updates per second should be based on actual network conditions; BGP can be used as the routing update protocol. ② Routing table size: tests performance with a large routing table. Backbone routers typically run BGP with a routing table containing routes for the whole world, generally more than 100,000 routes; it is recommended to inject the routes via BGP and verify the count of exported routes. ③ Clock synchronization: tests clock accuracy and synchronization capability on ports, such as POS, that carry timing. ④ Protocol convergence time: tests the time for a routing change to be propagated to the whole network. Although this is quoted as a stand-alone router metric, it can generally only be tested on a network, and it varies with configuration changes; once the network is configured, this metric can be examined to measure overall network performance. The test time should be set according to the specific project and test target: it is generally considered that it should be between 60 and 300 seconds, and the options can be adjusted to user requirements and the test target. Router performance tests generally use the remote test method.
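The mixed packet-length recommendation above amounts to a weighted average frame size, from which the equivalent line-rate load follows. A minimal sketch: the percentages are the ones suggested in the text, and the 20 bytes of preamble plus inter-frame gap per frame is standard Ethernet overhead:

```python
# The author's suggested Ethernet mix (size in bytes -> share of frames)
mix = {60: 0.50, 128: 0.10, 256: 0.15, 512: 0.10, 1500: 0.15}
assert abs(sum(mix.values()) - 1.0) < 1e-9   # shares must total 100%

avg_frame = sum(size * share for size, share in mix.items())
print(f"weighted mean frame size: {avg_frame:.1f} bytes")  # 357.4 bytes

# Frames/s this mix represents at line rate on a 100 Mbit/s link,
# counting the 20 bytes of preamble + inter-frame gap per frame:
OVERHEAD = 20
link_bps = 100_000_000
fps = link_bps / (8 * (avg_frame + OVERHEAD))
print(f"approx. line-rate load: {fps:.0f} frames/s")
```

The same arithmetic gives the offered load for any site-specific distribution measured on the real network.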

(III) Conformance testing

Router conformance testing usually uses the "black box" method: the device under test (the IUT) is treated as a black box, and the test system interfaces with it through the points of control and observation (PCOs).

Different test events are controlled and observed through different PCOs, and whether the responses follow the specification, i.e. whether the timing relationships and data match, is checked; the test verdict can be one of three: pass, fail, or inconclusive. A router is a complex network interconnect device that implements a variety of protocols at various communication layers: for example, physical- and link-layer interface protocols; internet-layer protocols such as IP/ICMP; transport-layer protocols such as TCP/UDP; application-layer protocols such as Telnet and SNMP; and routing protocols such as RIP/OSPF/BGP.

Protocol conformance testing should in principle cover every protocol the router implements. Because there is so much to test and testing is complex, in practice the important protocols and the content of most concern can be selected for testing. Since a backbone Internet router may affect global routing, router tests should pay special attention to conformance of the routing protocols, such as OSPF and BGP. Because conformance testing can only select a limited set of test cases, it generally does not cover all of a protocol's content; even a device that passes is not guaranteed to implement the protocol completely, so the best check is operation in a real-world environment. Router conformance testing generally uses the distributed or remote test method.

(IV) Interoperability testing

Because communication protocols, and routing protocols in particular, are complex and have many implementation options, two routers implementing the same protocol are not guaranteed to interoperate. And because conformance testing's coverage is limited, even passing a protocol conformance test cannot guarantee a complete implementation of the protocol. It is therefore necessary to test equipment for interoperability.

An interoperability test is in effect a conformance test in which the test instrument is replaced by the peer equipment to be interoperated with: some important and typical interconnection configurations are selected, and the two devices are observed to see whether they work together as expected.

(V) Stability and reliability testing

Since most routers need to run 24 hours a day, 7 days a week, the stability and reliability of core backbone Internet routers are particularly important, so users need to understand a router's stability and reliability.

A router's stability and reliability are hard to test. There are generally two approaches: (1) the manufacturer derives them from the reliability of key components and of the backup systems; (2) the user or manufacturer compiles failure-rate statistics from a large number of identical products in service. Of course, a long continuous test run may also be required to give some assurance of the router's reliability and stability.
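Approach (2), failure-rate statistics, is usually summarized as MTBF (mean time between failures) and MTTR (mean time to repair). A minimal sketch of the standard steady-state availability formula, with purely illustrative numbers:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative numbers only: a router with a 50,000-hour MTBF
# and a 4-hour mean time to repair.
a = availability(50_000, 4)
downtime_min_per_year = (1 - a) * 365 * 24 * 60
print(f"availability: {a:.5f}")                     # 0.99992
print(f"downtime: ~{downtime_min_per_year:.0f} min/year")
```

Redundant parts raise the effective MTBF (a failure is masked until the spare also fails), which is why the fleet statistics and the component-reliability calculation should be compared against each other.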

(VI) Network management testing

Network management testing generally tests the ability of network management software to manage the network and the network devices. Because the router is the core equipment of an IP network, its network management support must be tested. If a router comes with network management software, that software can be used to check the implementation of configuration management, security management, performance management, accounting management, fault management, topology management, and view management. If the router does not come with network management software, the consistency of its SNMP protocol implementation and its MIB implementation should be tested. Because a router implements many MIBs, each containing a large amount of content, it is difficult to test the MIBs completely; generally the router's MIB implementation is checked by sampling the important MIB entries.


Thursday, January 19, 2012

【 Weak current College 】 Server usage mistakes and how to use servers properly


The server is the vital core device of a network, so keeping servers working with high performance, stably, and continuously has always been a top concern for users. However, on this issue we have found that many users have not configured their servers properly, so the servers do not work in their best state. Common server configuration errors typically show up in the following ways.

Common server usage misconceptions
Myth one: a server with redundancy features, unused

Many high-performance servers provide disk array functionality, but because users do not understand it they purchase only one hard drive, gaining no data redundancy and losing the storage security and performance optimization an array provides.

Myth two: a high-end server with a low-end configuration

The high-end server a user purchases can itself meet very high performance requirements, but it is configured with a low-speed, small-capacity hard disk and a small amount of memory, which significantly lowers the server's overall performance.

Myth three: not knowing the server's performance bottlenecks, resulting in wasted resources

Users' understanding of the server is excessively one-sided: they recognize the importance of certain components and pour all their investment into them, while ignoring the optimization of the other components, so the performance of even the favored components cannot be brought into play. From the analysis above it can be seen that, whether because users do not fully understand server capabilities or because of usage and configuration problems, many servers do not run in an optimal state. Industry statistics show that 80% of servers are not optimally designed, 90% of servers have no scheduled system performance monitoring, 95% of servers have no complete data redundancy or security measures, and almost half of servers have no data backup solution. Such servers are in a sub-healthy state, showing up as: no hardware redundancy or system protection for critical parts such as power supplies, fans, hard drives, controllers, cables, network adapters, and CPUs; use of low-speed or poorly compatible components and unreasonable configuration of memory, CPU, and hard disk controllers, causing performance to decline; and no use of any server management software or hardware, so that when a failure causes downtime, management has serious gaps.

How to use a server properly

To improve the health of servers, the Transocean Waves company, which specializes in this area, has put forward feasible proposals addressing these shortcomings:

For the storage part, add redundant hard disks and an array controller card to provide data redundancy and substantially increase system I/O performance.

Add CPU redundancy to the server, using SMP (symmetric multiprocessing) technology to improve system performance and make the central processing redundant.

Add redundant network cards to improve network I/O performance, so that if one network connection fails the server is not disconnected from the network.

Add redundant power supply modules to the server to strengthen its power supply capability, so that when one power supply module fails the system does not go down from loss of power.

Increase the server's RAM to meet the growing requirements of the operating system and applications and to improve server performance.

In addition, the server's overall performance needs to be balanced, avoiding performance bottlenecks and security risks. Dedicated optimization work should be done on each part, from CPU processing power, memory size, data redundancy, data storage capacity, network I/O performance, power supply, and fan cooling capacity to system fault alarm capability and failover capability. For example, the storage subsystem can be optimized by adding hard drives and array cards, increasing the array card's cache, choosing hot-swappable hard drive carriers, using more than one channel on the array card, and selecting the most suitable array (RAID) level to meet the various read and write performance requirements.
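The trade-off behind "selecting the most suitable array level" can be sketched with a simplified capacity and fault-tolerance model. Real controllers differ in details, and the 4 × 2 TB configuration below is only an example:

```python
def raid_usable(level, disks, disk_tb):
    """Usable capacity (TB) and guaranteed tolerated disk failures
    for common RAID levels.  Simplified model for illustration."""
    if level == 0:
        return disks * disk_tb, 0              # striping only: no redundancy
    if level == 1:
        return disk_tb, disks - 1              # every disk mirrors the same data
    if level == 5:
        return (disks - 1) * disk_tb, 1        # one disk's worth of parity
    if level == 10:
        return disks // 2 * disk_tb, 1         # mirrored stripes: 1 guaranteed
    raise ValueError(f"level {level} not modeled")

for level in (0, 1, 5, 10):
    cap, tol = raid_usable(level, disks=4, disk_tb=2)
    print(f"RAID {level:>2}: {cap} TB usable, survives {tol} failure(s)")
```

Write performance differs too (RAID 5 pays a parity-update penalty; RAID 10 does not), which is why the text recommends matching the level to the workload's read/write profile.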

Determine the minimum memory capacity according to the operating system, the number of users, the applications, and the number of CPUs; add a remote management card for online diagnosis of failures that occur while memory is in use.

Determine the number of CPUs according to the processing power the system needs, the system's CPU redundancy requirements, the number of users, and the scope of applications; use operating system performance monitoring software and network management software to monitor CPU occupancy, and increase or decrease CPUs accordingly.

Use AFT (adapter fault tolerance), ALB (adaptive load balancing), FEC (Fast EtherChannel), and other NIC redundancy technologies to increase the server's network I/O performance.

Add redundant power supply modules to effectively protect the server's power supply, preventing system downtime caused by the failure of a single power supply module; add or adjust redundant fans to ensure the cooling of the server system and prevent the server from overheating and failing.

In short, carefully check for existing bottlenecks and deficiencies, optimize each part accordingly, and remove the barriers constraining performance while protecting the input-output ratio; this gives you server resources that properly fit your needs and avoids the mistakes of server use.




Wednesday, January 18, 2012

【 Weak current College 】 ADSL subscriber line maintenance and fault analysis


ADSL access technology dramatically improves the user's Internet access speed, removing past network bottlenecks. But with the rapid increase in ADSL users, user fault complaints have also increased significantly, and line faults account for a considerable proportion of these complaints. This is mainly because ADSL broadband access directly reuses the existing PSTN subscriber lines of the access network, and the original lines were not designed for high-frequency signal transmission, so ADSL subscriber line failures occur frequently, troubling both Internet users and line maintenance personnel.


The causes of ADSL subscriber line failures fall mainly into two broad categories: the first is attenuation-class line faults; the second is interference-class line faults. Attenuation-class faults mainly appear as: subscriber line too long, line material with poor transmission performance, open circuits, short circuits, oxidized or bad connectors, bridged taps, loading coils left on the line, etc. Interference-class faults mainly appear as: high background noise at the central office or customer equipment, radio interference, crosstalk between different lines in the same cable, electromagnetic interference from household appliances, motors, or electric locomotives, weather-related interference (such as lightning), etc. An actual user failure is often a combination of several of the above factors. The following analyzes some ADSL fault cases from routine maintenance.

Fault case 1: bad, oxidized connector

[Fault symptom] The ADSL user's phone calls are normal but there is no Internet access; the Modem's activation indicator keeps flashing. If the user takes the phone off-hook, the Modem activates and its light comes on correctly.

[Test] In the central office test room the subscriber line was shorted (user wires A and B shorted together); the loop resistance measured from the user end was 4653 Ω, while the loop resistance measured from the junction box outside the user's building was 673 Ω.

[Analysis] Since the user can make calls normally, it can be concluded that the outside line has no open or short circuit. The loop resistance tested at the junction box is normal, but the loop resistance tested at the user end is far out of spec. Inspection found that the user's drop wire was made of three sections twisted together, with seriously oxidized joints. After the drop wire was replaced the fault disappeared: the user's phone and Internet access were normal, with no more disconnections.

[Summary] The loop resistance test is an important test in maintaining ADSL lines from the central office. For a line suitable for opening ADSL service, the loop resistance should be less than 900 Ω. If the line length derived from the measured loop resistance (loop resistance divided by the line's DC resistance per unit length) deviates greatly from the user's actual line length, there are bad or oxidized connectors on the circuit. Note that when the 112 automatic test system measures the subscriber loop resistance, the DC current it applies to the line will break down mildly oxidized films and produce good-looking results; this is also why, in this case, taking the phone off-hook allowed the Modem to activate (after a PSTN user goes off-hook, current flows in the line).
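The length-from-loop-resistance check described above can be sketched as follows. The 280 Ω/km loop-resistance figure for 0.4 mm copper pair is an assumption for illustration; substitute the value for the actual cable plant:

```python
def loop_length_km(loop_ohms, ohms_per_km_loop=280.0):
    """Estimate subscriber-line length from measured loop resistance.

    280 ohm/km loop resistance is an assumed figure for 0.4 mm copper
    pair at room temperature -- use the real cable plant's value.
    """
    return loop_ohms / ohms_per_km_loop

# The case above: 4653 ohm at the user end vs 673 ohm at the junction box.
print(f"{loop_length_km(4653):.1f} km implied")  # ~16.6 km: implausible, suggests bad joints
print(f"{loop_length_km(673):.1f} km implied")   # ~2.4 km: plausible line length
```

When the implied length is several times the known route length, the excess resistance is in joints, not in copper, which is exactly the diagnosis reached in this case.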

Fault case 2: excessive drop-wire attenuation

[Fault symptom] An ADSL user subscribed to 2 Mbit/s downstream bandwidth, but access is noticeably slower than for other 2 Mbit/s users.

[Test] Testing with an instrument at the user end showed a maximum attainable downstream rate of 1.3 Mbit/s and a loop resistance of 820 Ω. At the junction box outside the user's building, the port tested at a maximum attainable downstream rate of 2.9 Mbit/s with a loop resistance of 795 Ω; the user's drop wire is about 100 m long.

[Analysis] From the test results, the user's low attainable rate is mainly due to excessive signal attenuation in the drop wire. After the drop wire was replaced with copper twisted pair, the maximum attainable downstream rate measured at the user end rose to 2.6 Mbit/s, and the user's access rate improved.

[Summary] Most current drop wires are parallel single-strand wires of iron-alloy material; their high-frequency transmission performance and interference immunity are well below those of copper twisted pair, which can cause excessive line attenuation, low attainable rates, or activation difficulties. Copper twisted pair is recommended, and the length should not exceed 100 m.

Fault case 3: unreasonable drop-wire routing causing serious signal interference

[Fault symptom] An ADSL user reports that Internet access is slow, with download speeds of only 7 kbit/s, and frequent disconnections.

[Test] Testing with an instrument at the user end showed a maximum attainable downstream rate of 232 kbit/s, with an actual achieved rate of only 96 kbit/s. The user's drop wire is 150 m long, is parallel wire, and winds around fixed telephone lines, an outdoor wire pole, an indoor heating pipe, and many other places. After replacing the drop wire with twisted pair, shortening it to under 100 m, and keeping it from winding around other objects, the fault disappeared: the maximum attainable downstream rate reached 2.6 Mbit/s, the actual downstream rate was 512 kbit/s, and the download speed rose to 60 kbit/s.

[Analysis] For a subscriber line suitable for opening ADSL service, the background noise power spectral density in the ADSL band (10 kHz to 1.1 MHz) should be less than -94 dBm/Hz. The background noise tested at the user end was too high, but the test result at the junction box above the drop wire was normal, so the user's failure is largely due to the excessively long and badly routed drop wire introducing too much interference, causing low attainable rates and frequent disconnections.

[Summary] An excessively long drop wire with excessive winding acts like an antenna, picking up external electromagnetic interference, and the poor interference immunity of parallel wire further exacerbates the problem.

Fault case 4: high background noise on the device port

[Fault symptom] The users on one type of ADSL central office equipment generally have low attainable rates, and users with loop resistance above 500 Ω are very difficult to activate.

[Test] From the fault's behavior, the problem is likely in the central office equipment. Disconnecting a user's outside line at the distribution frame and testing the DSLAM port directly at that point showed a maximum attainable downstream rate of only 3.4 Mbit/s; the background noise power spectral density and the ADSL in-band broadband noise were then tested further.

[Analysis] The maximum attainable downstream rate tested at the user port was only 3.4 Mbit/s, and the test point was very close to the port, so line attenuation can be ignored; the cause of the fault is therefore mainly signal interference. Disconnecting the user's outside line at the distribution frame before testing rules out outside interference, which verifies that the noise comes from the DSLAM port itself. For a subscriber line suitable for opening ADSL service, the reference values for ADSL in-band broadband noise are: upstream less than -50.23 dBm; downstream less than -40.23 dBm. In this case the port's background noise power spectral density did not exceed the reference value of -94 dBm/Hz, but the noise power over the whole ADSL band, especially in the downstream direction, was high, so the downstream broadband noise indicator failed.
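The relationship between the two noise limits can be made concrete: integrating a flat noise floor over the band gives its total power, which shows how a noise floor that meets the -94 dBm/Hz PSD limit can still fail the band-power limit. A sketch, where the 138 kHz to 1.104 MHz downstream band is an assumed ADSL band split:

```python
import math

def band_power_dbm(psd_dbm_hz, bandwidth_hz):
    """Total power of a flat noise floor over a band:
    P[dBm] = PSD[dBm/Hz] + 10*log10(BW[Hz])."""
    return psd_dbm_hz + 10 * math.log10(bandwidth_hz)

# A noise floor sitting right at the -94 dBm/Hz PSD limit, integrated
# over an assumed ~966 kHz ADSL downstream band (138 kHz to 1.104 MHz):
p = band_power_dbm(-94.0, 1_104_000 - 138_000)
print(f"{p:.1f} dBm")   # about -34.2 dBm, above the -40.23 dBm downstream limit
```

So the broadband noise test is the stricter of the two for wideband noise, which is consistent with this case: the PSD passed while the downstream band power failed.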

[Summary] For an ADSL line operating in G.DMT mode, the maximum attainable downstream rate tested at the distribution frame with the user's outside line disconnected should be greater than 7 Mbit/s; a lower value indicates a high noise floor on the DSLAM user port or high attenuation in the tie cabling. So when maintaining faulted ADSL users, first make sure the indicators tested at the distribution frame are normal, thereby avoiding blind outside-line testing.

In summary, to ensure normal use for ADSL users and reduce the complaint rate, attention should be paid to the user's cabling at installation time, avoiding wiring that will cause failures in the future.

On the other hand, with the large-scale deployment of ADSL, the line maintenance tools and test methods of the PSTN era are no longer sufficient to find the causes of broadband users' failures, and some new test methods need to be introduced, for example background noise power spectral density testing, broadband noise testing, line longitudinal balance testing, and cable crosstalk testing. Only through comprehensive testing can the real obstacles for ADSL users be found.




Tuesday, January 17, 2012

【 Weak current College 】 Six tips for LAN optimization




Configure the server's hard disk reasonably

Office users on a LAN often use the network to print materials and access files. When for some reason network access is not working properly, we tend to assume, incorrectly, that the reduced network speed comes from a bottleneck in some network equipment such as network cards, switches, or hubs; in fact the greatest impact on network speed is often the server's hard disk. Properly configuring the network server's hard disk therefore greatly improves the performance of the whole LAN.


Make connections according to the rules

We all know that every computer in a LAN is connected with twisted pair, but simply joining two computers with a twisted-pair cable is not enough; to communicate, we must follow certain wiring rules. I once tried to connect two computers 100 metres apart with twisted pair and could not bring the link up no matter what I tried; an expert then advised that a twisted-pair run must not exceed 100 m. If you need to connect two computers more than 100 metres apart, you must use a conversion (repeating) device. When connecting conversion devices and switches, we must also wire the jumpers correctly. This is because Ethernet in general uses two pairs, on pins 1, 2, 3, and 6; if you do not keep each circuit on its own pair but instead split the original pairing and use wires from different pairs, crosstalk results, with a large impact on network performance. In a 10M network the effect is not obvious, but in a 100M network, if the traffic is heavy or the distance long, the network may fail to connect at all.
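The pin-pairing rule above (pins 1-2 and 3-6 must each stay on one twisted pair) can be checked mechanically; the pin-to-pair numbering below follows the T568B assignment:

```python
# T568B pin-to-pair assignment: each transmit/receive circuit must stay
# on a single twisted pair, or a "split pair" causes crosstalk.
PAIR_OF_PIN = {1: 2, 2: 2,   # pair 2 (orange) on pins 1-2
               3: 3, 6: 3,   # pair 3 (green)  on pins 3-6
               4: 1, 5: 1,   # pair 1 (blue)   on pins 4-5
               7: 4, 8: 4}   # pair 4 (brown)  on pins 7-8

def circuit_on_one_pair(pins):
    """True if all pins of a circuit share a single twisted pair."""
    return len({PAIR_OF_PIN[p] for p in pins}) == 1

print(circuit_on_one_pair((1, 2)))  # True  -- correct 10/100BASE-T TX wiring
print(circuit_on_one_pair((3, 6)))  # True  -- correct RX wiring
print(circuit_on_one_pair((3, 4)))  # False -- split pair: causes crosstalk
```

A split-pair cable often passes a simple continuity test, which is why it shows up only as the distance- and load-dependent failures the text describes.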

Use "bridge"-type equipment correctly

"Bridge"-type devices are usually used to join network devices on the same segment, while routers join devices on different segments. I once installed microwave networking equipment for a unit; after the physical devices were connected and debugged, the server kept prompting that the current segment number should be the other side's segment number. After the segment numbers on the two sides were made consistent, the alarm disappeared: this equipment was a bridge by nature. Later, installing microwave networking equipment at another site with a different manufacturer's product, we changed the segment numbers on both sides to be consistent before connecting; when the equipment was brought up, the server alarmed: current routing error. After the segment number on one side was changed back, the alarm disappeared. This shows that distinguishing "routing" devices from "bridge" devices is important when setting network parameters.

Strictly implement the grounding requirements

Because the signals carried in a LAN are weak, the slightest improper operation, or failure to follow the network equipment's specific operational requirements, may introduce serious interference and can even block the whole network. Some network switching equipment in particular, since it involves long-distance lines, has very strict grounding requirements; otherwise the network devices will not reach the required connection rate, producing all kinds of inexplicable symptoms. Once the router's power plug was casually put into an ordinary AC outlet, and the 128K DDN line would not connect to the Internet. The telecom office examined the lines and found them quite normal; we then checked the router's neutral-to-ground voltage, found it abnormal, moved the plug back to the UPS outlet, and everything returned to normal. On another occasion, the ground pin of the router's power plug was broken, causing packet loss: Ping connectivity was intermittently good and bad, and replacing the power cable restored everything. So when using a network device, be sure to meet the conditions the device specifies, or it will bring a lot of trouble to our work.

Use network cards of good quality and the right speed

In a LAN, it is quite common for computers to be unable to communicate with each other, and the causes can be many. By my statistics, most LAN failures are related to the network card: the adapter is not correctly installed, or the cable is bad; an older network card may not be correctly recognized by the computer; and a network adapter installed in a server bears the impact of large volumes of data and can reach end-of-life. To avoid this, we must be willing to invest: if the network card is installed in a server, be sure to use a good-quality card, because the server runs continuously and only a quality card can work "for a long time"; and because the server transfers large volumes of data, the card's capacity must match, to achieve "a good horse with a good saddle".

Set up switches reasonably

The switch is an important data communications device in a LAN, and using it correctly and reasonably can also improve network performance in data transmission. I once configured a switch port for full duplex while the server had a certain model of Intel 100M EISA network card installed; everything was normal after installation, but under large data transfers the speed became very slow, and it finally turned out that this card does not support full duplex. After the switch port was set to half duplex, the fault disappeared. This shows that the switch port and the network card must agree in speed and duplex mode. Many network adapters and switches are auto-negotiating and should in principle adapt the speed and duplex mode correctly, but in fact, with mismatched brands, full-duplex mode is often not negotiated correctly: the server's network card is clearly set to full duplex, but the switch's duplex lamp does not light, and the only fix is to force the setting manually. So when setting a network device's parameters, be sure to reference the parameters of the server and of the other network devices and workstations, so that all the devices work matched.


Monday, January 16, 2012

【 Weak current College 】 What is optical burst switching technology




Introduction: there are currently three main optical network switching technologies: optical circuit switching OCS (Optical Circuit Switching), optical packet switching OPS (Optical Packet Switching), and optical burst switching OBS (Optical Burst Switching).

Three optical switching technologies

There are currently three main optical network switching technologies: optical circuit switching OCS (Optical Circuit Switching), optical packet switching OPS (Optical Packet Switching), and optical burst switching OBS (Optical Burst Switching).

The most mature of the three is optical circuit switching OCS: for each connection request, the network must establish a light path from source to destination (each link along it must be assigned a dedicated wavelength). The exchange process is divided into three phases: ① the link establishment phase, a bidirectional bandwidth application process requiring both a request and a response confirmation; ② the link holding phase, during which the link is used exclusively by the two communicating parties and no other party may share it; ③ the link teardown phase, in which either party may issue a disconnect signal, and resources are truly freed only after the other party confirms receipt of the disconnect signal.

In the long run, all-optical packet switching (OPS) is the direction in which optical switching is evolving. OPS is a connectionless approach using a one-way reservation mechanism: data is transmitted without a dedicated routing and resource-allocation process beforehand. The packet payload closely follows its header on the same lightpath; at each network node the payload is buffered while the header is processed to determine the route to the packet's destination. Compared with OCS, OPS achieves much higher resource utilization and adapts well to bursty traffic. But it faces two nearly insurmountable obstacles: first, optical buffering technology is still undeveloped; second, at an OPS switching node the packets arriving on multiple inputs must be precisely synchronized. Optical packet switching is therefore difficult to realize in the short term.

In 1997, Chunming Qiao and J. S. Turner independently proposed a new optical switching technology, optical burst switching (OBS), as a transitional technology between circuit switching and packet switching. OBS combines the benefits of circuit switching and packet switching while overcoming some of the shortcomings of both, and it has attracted growing attention.

Burst

A "burst" in OBS can be viewed as a very long data packet assembled from a number of smaller data packets that share the same egress edge-node address and the same QoS requirements; these packets may come from the IP packets of a traditional IP network. The burst is the basic switching unit of an optical burst-switched network. It consists of two parts: a burst control packet (BCP, Burst Control Packet, playing the role that the packet header plays in packet switching) and the burst payload (BP). Burst data and control packets travel on physically separate channels, and each control packet corresponds to exactly one data burst; this separation is the core design concept of optical burst switching. In a WDM system, for example, the control packets occupy one or a few wavelengths while the bursts occupy all the remaining wavelengths.
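The BCP/payload pairing can be made concrete with two small data structures. This is an illustrative sketch only; the field names (`egress_node`, `offset_us`, and so on) are assumptions, not part of any OBS standard.

```python
from dataclasses import dataclass, field

@dataclass
class BurstControlPacket:
    # BCP: travels on its own control wavelength, ahead of the data burst.
    egress_node: str       # shared egress edge-node address of all packets in the burst
    offset_us: float       # time by which the burst lags behind this BCP
    burst_len_bytes: int   # length of the corresponding data burst
    route: list            # route information carried for the core nodes

@dataclass
class Burst:
    # Data burst: many packets with the same egress node and QoS class.
    packets: list = field(default_factory=list)

    @property
    def length(self):
        return sum(len(p) for p in self.packets)

def make_burst(packets, egress_node, offset_us, route):
    """Pair one BCP with exactly one data burst (the OBS core concept)."""
    burst = Burst(packets=list(packets))
    bcp = BurstControlPacket(egress_node, offset_us, burst.length, route)
    return bcp, burst
```

Note the one-to-one pairing: `make_burst` never emits a BCP without its burst or vice versa, mirroring the text's statement that each control packet corresponds to one data burst.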

The point of separating the control packet from the burst data is that the control packet can be transmitted ahead of the burst, compensating for the O/E/O conversion and electronic-processing delay the control packet incurs at each switching node. The burst itself is then switched transparently, entirely in the optical domain, at the all-optical switching nodes. This reduces the need for optical buffering, possibly to zero, sidestepping the shortcoming that optical buffer technology is not yet ready. Moreover, because the control packet is far smaller than the burst, the amount of data requiring O/E/O conversion and electronic processing drops sharply, reducing processing delay and greatly increasing the switching rate. The process resembles an outbound tour group: before departure, a staff member carrying the group's relevant documents arrives a day early to handle border-clearance formalities, tickets, and so on, so that when the tourists themselves set out their passage is much simplified.
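The "head start" the BCP needs follows directly from this reasoning: if the BCP suffers a processing delay at every hop while the burst flies through untouched, the initial offset must cover the accumulated per-hop delay or the burst would overtake its own control packet. A minimal sketch, assuming a uniform per-hop BCP processing time (the parameter names are illustrative):

```python
def base_offset_us(hop_count: int, bcp_processing_us: float) -> float:
    """Minimum offset between sending the BCP and sending the burst.

    At each of the hop_count core nodes the BCP undergoes O/E/O
    conversion and electronic processing (bcp_processing_us), while
    the burst passes through all-optically with no such delay. The
    offset must cover the total so the burst never catches up.
    """
    if hop_count < 0 or bcp_processing_us < 0:
        raise ValueError("hop count and processing time must be non-negative")
    return hop_count * bcp_processing_us
```

For example, a 5-hop path with 10 microseconds of BCP processing per node needs at least a 50-microsecond offset; schemes in the literature may add extra offset on top of this minimum for QoS differentiation.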

Edge and core node

Because of the bottlenecks in fiber-to-the-home, optical networks are currently used mainly in backbone and metropolitan-area networks, while the user side remains a traditional electronic IP network. An optical burst-switched network therefore consists mainly of optical core nodes and electronic edge nodes. Edge nodes are responsible for IP-packet access, classification, burst assembly, and scheduling in the forward direction, and for burst reception and disassembly in the reverse direction. At an ingress edge node, IP packets arriving through the line cards are sorted by destination address and assembled into data bursts; the corresponding header information is extracted to generate a control packet, and the burst data is cached in a burst queue awaiting scheduling. When a burst reaches the head of its send queue, the appropriate offset time between the burst and its control packet is computed and fed back to the control-packet generator, and the control packet, carrying the offset time, the burst length, and the specific route information, is sent out. When the offset time expires, the burst itself is sent. The egress edge node simply disassembles the burst and extracts the IP data it contains.
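The ingress edge node's classify-and-assemble step can be sketched as follows. This is an assumed simplification: real assemblers also use a timer so a slow queue is not starved, and the class and parameter names here are my own, not from any OBS implementation.

```python
from collections import defaultdict

class BurstAssembler:
    """Sketch of an ingress edge node: classify incoming IP packets by
    (egress edge node, QoS class), queue them, and emit a burst once a
    size threshold is reached."""

    def __init__(self, size_threshold: int):
        self.size_threshold = size_threshold
        self.queues = defaultdict(list)  # (egress, qos) -> list of packets

    def add_packet(self, egress: str, qos: int, packet: bytes):
        """Queue one packet; return a completed burst, or None if still filling."""
        key = (egress, qos)
        self.queues[key].append(packet)
        if sum(len(p) for p in self.queues[key]) >= self.size_threshold:
            # Burst complete: in a real node, the BCP would now be generated
            # and sent ahead of this burst by the computed offset time.
            return self.queues.pop(key)
        return None
```

Packets for different egress nodes or QoS classes accumulate in separate queues, matching the text's requirement that all packets in a burst share the same egress address and QoS requirements.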

Core node functionality is to control the Group find, Exchange, burst monitor (such as blocking probability, delay, etc), it is assumed that each wavelength optic support for K + 1 (single wavelength used for transmission control block BCP, K a wavelength used for transmission burst). BCP for transmission wavelength in core nodes need to be O/E converter and power routing table lookups, light switching matrix control, last update BCP data for E/O converter. The rest of the k-wave transmission burst data in core node does not need O/E/O transform, the entire Exchange transmission in optical domain, ensuring the transparency of the data. Figure in optical switching matrix of fiber delay line is used to cache burst data (you can only cache a finiteLong time), wait for the control group, by setting the appropriate offset time offsettime can make sudden data does not need to be cached in the intermediate nodes, directly through the OBS networks, which in turn can cancel the optical delay lines. Besides optical delay lines can also be used to resolve competition issues, reducing conflict and achieve WDM layer of QoS (quality of service). When the burst into optical switching matrix, the control unit control of optical switching matrix to select the corresponding output wavelength.