Hard drive interfaces: SCSI, SAS, FireWire, IDE, SATA. SAS connector types and RAID controllers

Introduction

Look at modern motherboards (or even some older platforms): do they really need a dedicated RAID controller? Most boards come with 3 Gb/s SATA ports, along with audio jacks and network adapters, and most modern chipsets, such as the AMD A75 and Intel Z68, support SATA at 6 Gb/s. With so much support from the chipset, a powerful processor, and plenty of I/O ports, do you need an additional storage card with a separate controller?

In most cases, ordinary users can create RAID 0, 1, 5, and even 10 arrays using the motherboard's built-in SATA ports and appropriate software, and get very high performance. But when a more complex RAID level (30, 50, or 60), more advanced disk management, or greater scalability is required, the chipset's controller may not cope. In such cases, professional-grade solutions are needed.

At that point you are no longer limited to SATA storage. A large number of specialized cards support SAS (Serial Attached SCSI) or Fibre Channel (FC) drives, and each of these interfaces brings its own advantages.

SAS and FC for professional RAID solutions

Each of the three interfaces (SATA, SAS, and FC) has its pros and cons; none of them can unconditionally be called the best. The strengths of SATA-based drives are high capacity and low price combined with high data transfer rates. SAS drives are renowned for their reliability, scalability, and high I/O rates. FC storage systems provide a constant and very high data transfer rate. Some companies still use Ultra SCSI solutions, even though they handle at most 16 devices (one controller and 15 drives) and their bandwidth does not exceed 320 MB/s (in the case of Ultra-320 SCSI), which cannot compete with more modern solutions.

Ultra SCSI was long the standard for professional enterprise storage. However, SAS is gaining popularity, as it offers not only significantly more bandwidth but also greater flexibility in mixed SAS/SATA systems, which lets you optimize cost, performance, availability, and capacity even within a single JBOD (Just a Bunch Of Disks). In addition, many SAS drives have two ports for redundancy: if one controller card fails, switching the drive to another controller keeps the whole system from going down. SAS thus ensures high reliability for the entire system.

Moreover, SAS is not just a point-to-point protocol between a controller and a storage device: with an expander, it supports up to 255 storage devices per SAS port. Using a two-tier structure of SAS expanders, it is theoretically possible to attach 255 x 255 (a little more than 65,000) storage devices to one SAS channel, provided, of course, that the controller can support that many devices.

Adaptec, Areca, HighPoint, and LSI: Four SAS RAID Controller Tests

In this benchmark, we examine the performance of modern SAS RAID controllers, which are represented by four products: Adaptec RAID 6805, Areca ARC-1880i, HighPoint RocketRAID 2720SGL and LSI MegaRAID 9265-8i.

Why SAS and not FC? On the one hand, SAS is by far the most interesting and relevant architecture, providing features such as zoning that are very attractive to professional users. On the other hand, FC's role in the professional market is declining, with some analysts even predicting its complete demise based on the number of hard drives shipped. According to IDC experts, the future of FC looks rather bleak, while SAS hard drives could claim 72% of the enterprise hard drive market by 2014.

Adaptec RAID 6805

Chip manufacturer PMC-Sierra launched the Series 6 RAID controller family under the "Adaptec by PMC" brand in late 2010. Series 6 controller cards are based on a dual-core ROC (RAID on Chip, the PMC-Sierra PM8013) with 512 MB of cache and up to 6 Gb/s per SAS port. There are three low-profile models: the Adaptec RAID 6405 (four internal ports), the Adaptec RAID 6445 (four internal and four external ports), and the one we tested, the Adaptec RAID 6805 with eight internal ports, costing about $460.

All models support JBOD and RAID levels 0, 1, 1E, 5, 5EE, 6, 10, 50, and 60.

Connected to the system via an eight-lane PCI Express 2.0 interface, the Adaptec RAID 6805 supports up to 256 devices through SAS expanders. According to the manufacturer's specifications, sustained data transfer to the host can reach 2 GB/s, with peaks of 4.8 GB/s on the aggregated SAS ports and 4 GB/s on the PCI Express interface; the last figure is the theoretical maximum for a PCI Express 2.0 x8 link.

ZMCP: cache protection without maintenance

Our test unit came with an Adaptec Flash Module 600, which uses Zero Maintenance Cache Protection (ZMCP) instead of a legacy Battery Backup Unit (BBU). The ZMCP module is a 4 GB NAND flash unit used to back up the controller cache in the event of a power outage.

Because copying from cache to flash is very fast, Adaptec uses capacitors rather than batteries to carry the power. Capacitors have the advantage of lasting as long as the card itself, whereas backup batteries need to be replaced every few years. In addition, once copied to flash memory, the data can be retained for several years; with a battery, by comparison, you usually have only about three days before the cached information is lost, which forces a rush to restore power and recover the data. As the name suggests, ZMCP is a zero-maintenance solution that can ride out power failures.


Performance

The Adaptec RAID 6805 falls behind in our RAID 0 streaming read/write tests. Admittedly, RAID 0 is not the typical case for a business that needs data protection (although it might well be used for a video rendering workstation). Sequential reads run at 640 MB/s and sequential writes at 680 MB/s; on both counts the LSI MegaRAID 9265-8i takes the top spot in our tests. The Adaptec RAID 6805 does better in the RAID 5, 6, and 10 tests, but is not the absolute leader there either. In an SSD-only configuration, the Adaptec controller reaches speeds of up to 530 MB/s, but is outperformed by the Areca and LSI controllers.

The Adaptec card automatically recognizes what it calls a HybridRAID configuration, a mix of HDDs and SSDs, offering RAID 1 and 10 in this configuration. Here the card outperforms its competitors thanks to special read/write algorithms, which automatically route reads to the SSD and writes to both the hard drives and the SSD. Reads therefore perform as in an SSD-only system, while writes are no worse than in a hard drive system.
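The routing idea itself is easy to picture in code. Below is a minimal sketch of such a hybrid mirror (a conceptual illustration only, not Adaptec's actual firmware logic; the ssd and hdd device objects are hypothetical stand-ins):

    # Conceptual sketch of hybrid HDD+SSD mirroring (not Adaptec's actual
    # firmware logic): reads are served from the fast SSD copy, while writes
    # go to both members so the mirror stays consistent.

    class HybridMirror:
        def __init__(self, ssd, hdd):
            # ssd and hdd are any objects exposing read(lba, n)/write(lba, data);
            # both hold identical data, as in a RAID 1 pair.
            self.ssd = ssd
            self.hdd = hdd

        def read(self, lba, length):
            # Route all reads to the SSD: read latency matches an SSD-only array.
            return self.ssd.read(lba, length)

        def write(self, lba, data):
            # Mirror every write to both devices: write speed is limited by
            # the slower member, i.e. no worse than an HDD-only mirror.
            self.ssd.write(lba, data)
            self.hdd.write(lba, data)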

However, our test results do not live up to the theory. Except for the Web server benchmark, where the hybrid approach pays off, the hybrid SSD-plus-HDD system cannot come close to the speed of an SSD-only system.

The Adaptec controller performs much better in the HDD I/O performance tests. Regardless of the benchmark type (database, file server, web server, or workstation), the RAID 6805 keeps pace with the Areca ARC-1880i and LSI MegaRAID 9265-8i, taking first or second place; only the HighPoint RocketRAID 2720SGL falls behind in the I/O tests. If you replace the hard drives with SSDs, the LSI MegaRAID 9265-8i outperforms the other three controllers significantly.

Software installation and RAID setup

Adaptec and LSI ship well-organized, easy-to-use RAID management tools, which also let administrators access the controllers remotely over the network.

Installing an array

Areca ARC-1880i

Areca is also bringing the ARC-1880 series into the 6 Gb/s SAS RAID controller market. According to the manufacturer, target applications range from NAS and storage servers to HPC, redundancy, security, and cloud computing.

Our ARC-1880i sample, with eight internal SAS ports and an eight-lane PCI Express 2.0 interface, can be purchased for $580. The low-profile card, the only one in our roundup with an active cooler, is built around an 800 MHz ROC with 512 MB of DDR2-800 data cache. Using SAS expanders, the Areca ARC-1880i supports up to 128 storage devices. An optional battery backup unit can be added to preserve the contents of the cache during a power failure.

In addition to single mode and JBOD, the controller supports RAID levels 0, 1, 1E, 3, 5, 6, 10, 30, 50, and 60.

Performance

The Areca ARC-1880i performs well in the RAID 0 read/write tests, reaching 960 MB/s reads and 900 MB/s writes; only the LSI MegaRAID 9265-8i is faster in this particular test. The Areca controller does not disappoint in the other benchmarks either: with both hard drives and SSDs, it always competes closely with the test winners. Although the Areca controller led in only one benchmark (sequential reads in RAID 10), it showed very high results there, a read speed of 793 MB/s, while the fastest competitor, the LSI MegaRAID 9265-8i, managed only 572 MB/s.

However, sequential throughput is only one part of the picture; the other is I/O performance. The Areca ARC-1880i excels here as well, competing on equal terms with the Adaptec RAID 6805 and LSI MegaRAID 9265-8i. As with the transfer rate benchmarks, the Areca controller also won one of the I/O tests, the Web server benchmark: it dominates at RAID 0, 5, and 6, while the Adaptec 6805 takes the lead in RAID 10, leaving the Areca controller a close second.

Web GUI and configuration options

Like the HighPoint RocketRAID 2720SGL, the Areca ARC-1880i is managed through a convenient web interface and is easy to set up.

Installing an array

HighPoint RocketRAID 2720SGL

The HighPoint RocketRAID 2720SGL is a SAS RAID controller with eight internal SATA/SAS ports, each supporting 6 Gb/s. According to the manufacturer, this low-profile card is aimed at storage systems for small and medium businesses and at workstations. The key component of the card is the Marvell 9485 RAID controller, and its main competitive advantages are its small size and eight-lane PCIe 2.0 interface.

In addition to JBOD, the card supports RAID 0, 1, 5, 6, 10, and 50.

In addition to the model we tested, there are four more models in the low-profile HighPoint 2700 series: the RocketRAID 2710, 2711, 2721, and 2722, which differ mainly in the type (internal/external) and number (four to eight) of ports. Our tests used the cheapest of these RAID controllers, the RocketRAID 2720SGL ($170). All cables for the controller are purchased separately.

Performance

In sequential reads and writes on a RAID 0 array of eight Fujitsu MBA3147RC drives, the HighPoint RocketRAID 2720SGL achieves an excellent read speed of 971 MB/s, second only to the LSI MegaRAID 9265-8i. Its write speed of 697 MB/s is not as fast, but still beats the Adaptec RAID 6805. The RocketRAID 2720SGL also shows a wide spread of results: with RAID 5 and 6 it outperforms the other cards, but with RAID 10 its read speed drops to 485 MB/s, the lowest of the four samples tested, and its sequential write speed in RAID 10 is even worse, at only 198 MB/s.

This controller is clearly not made for SSDs. Read speed tops out at 332 MB/s and write speed at 273 MB/s; even the Adaptec RAID 6805, itself not particularly strong with SSDs, posts results twice as high. HighPoint is therefore no competition for the two cards that really shine with SSDs, the Areca ARC-1880i and LSI MegaRAID 9265-8i, which are at least three times faster.

That is about all the good we can say about HighPoint's I/O performance: the RocketRAID 2720SGL ranks last in our tests across all four Iometer benchmarks. The HighPoint controller is reasonably competitive in the Web server benchmark, but loses significantly to its competitors in the other three. This becomes especially apparent in the SSD tests, where the RocketRAID 2720SGL is clearly not optimized for SSDs and fails to exploit their advantage over hard drives. For example, it scores 17,378 IOPS in the database benchmark, while the LSI MegaRAID 9265-8i outperforms it more than fourfold with 75,037 IOPS.

Web GUI and array settings

The RocketRAID 2720SGL web interface is convenient and easy to use. All RAID parameters are easily set.

Installing an array

LSI MegaRAID 9265-8i

LSI positions the MegaRAID 9265-8i as a device for the SMB market, aimed at reliability-sensitive cloud and other business applications. The MegaRAID 9265-8i is one of the more expensive controllers in our test (it costs $630), but as our tests show, the money buys real benefits. Before we present the test results, let's discuss the controller's technical features and the FastPath and CacheCade software options.

The LSI MegaRAID 9265-8i uses a dual-core LSI SAS2208 ROC with an eight-lane PCIe 2.0 interface. The "8i" at the end of the name indicates eight internal SATA/SAS ports, each supporting 6 Gb/s. Up to 128 storage devices can be connected to the controller via SAS expanders. The LSI card carries 1 GB of DDR3-1333 cache and supports RAID levels 0, 1, 5, 6, 10, and 60.

Software and RAID configuration: FastPath and CacheCade

LSI claims that FastPath can significantly speed up I/O when SSDs are attached. According to LSI, FastPath works with any SSD, significantly increasing the write/read performance of an SSD-based RAID array: 2.5x for writes and 2x for reads, reaching up to 465,000 IOPS. We were not able to verify that figure; however, the card got the most out of our five SSDs even without FastPath.

The next application for the MegaRAID 9265-8i is called CacheCade. It lets you use an SSD as cache memory for an array of hard drives. According to LSI, this can speed up reads by up to 50 times, depending on the size of the data set, the application, and the usage pattern. We tested CacheCade on a RAID 5 array of seven hard drives plus one SSD used as cache. Compared with a RAID 5 array of eight hard drives, CacheCade improved not only I/O speed but overall performance, and the gain grows as the constantly used data set shrinks. Testing with 25 GB of data, we got 3,877 IOPS in the Iometer Web server template, versus only 894 IOPS for the plain hard drive array.
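The principle behind this kind of SSD caching can be sketched as follows (a conceptual illustration with a simple LRU policy, not LSI's actual CacheCade algorithm; the hdd_array object is a hypothetical stand-in). The smaller the hot data set relative to the SSD, the higher the hit rate, which matches the behavior described above:

    # Conceptual sketch of SSD read caching in the spirit of CacheCade (not
    # LSI's actual algorithm): hot blocks are promoted to an SSD, so repeated
    # reads are served at SSD speed while the HDD array remains the backing store.
    from collections import OrderedDict

    class SSDReadCache:
        def __init__(self, hdd_array, capacity_blocks):
            self.hdd = hdd_array                 # object exposing read(lba)
            self.cache = OrderedDict()           # lba -> data, kept in LRU order
            self.capacity = capacity_blocks      # SSD size in cache blocks

        def read(self, lba):
            if lba in self.cache:                # cache hit: SSD-speed read
                self.cache.move_to_end(lba)
                return self.cache[lba]
            data = self.hdd.read(lba)            # cache miss: go to the HDDs
            self.cache[lba] = data               # promote the block to the SSD
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict the least recently used
            return data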

Performance

In the end, the LSI MegaRAID 9265-8i has the fastest I/O of all the SAS RAID controllers in this review. In sequential read/write operations, however, it shows average performance, since its sequential results depend heavily on the RAID level in use. In the RAID 0 hard drive tests we measured a sequential read speed of 1080 MB/s (significantly higher than the competition) and a sequential write speed of 927 MB/s, also faster than the competition. In RAID 5 and 6, though, the LSI controller is inferior to all of its competitors, beating them only in RAID 10. In the SSD RAID test, the LSI MegaRAID 9265-8i delivers the best sequential write performance (752 MB/s), and only the Areca ARC-1880i surpasses it in sequential reads.

If you're looking for an SSD-focused RAID controller with high I/O performance, the LSI controller is the leader here. With few exceptions, it takes first place in our file server, web server, and workstation I/O tests; when your RAID array consists of SSDs, LSI's competitors can't match it. For example, in the workstation benchmark the MegaRAID 9265-8i reaches 70,172 IOPS, while the second-place Areca ARC-1880i trails it by almost a factor of two with 36,975 IOPS.

RAID Software and Array Installation

As with Adaptec, LSI has convenient tools for managing the RAID array through the controller. Here are some screenshots:

Software for CacheCade

RAID software

Installing an array

Comparison table and test bench configuration

Manufacturer: Adaptec | Areca
Product: RAID 6805 | ARC-1880i
Form factor: Low profile MD2 | Low profile MD2
Number of SAS ports: 8 | 8
SAS bandwidth per port: 6 Gb/s (SAS 2.0) | 6 Gb/s (SAS 2.0)
Internal SAS ports: 2x SFF-8087 | 2x SFF-8087
External SAS ports: None | None
Cache: 512 MB DDR2-667 | 512 MB DDR2-800
Main interface: PCIe 2.0 (x8) | PCIe 2.0 (x8)
XOR chip and clock speed: PMC-Sierra PM8013 / no data | no data / 800 MHz
Supported RAID levels: 0, 1, 1E, 5, 5EE, 6, 10, 50, 60 | 0, 1, 1E, 3, 5, 6, 10, 30, 50, 60
Supported operating systems: Windows 7, Windows Server 2008/2008 R2, Windows Server 2003/2003 R2, Windows Vista, VMware ESX Classic 4.x (vSphere), Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), Sun Solaris 10 x86, FreeBSD, Debian Linux, Ubuntu Linux | Windows 7/2008/Vista/XP/2003, Linux, FreeBSD, Solaris 10/11 x86/x86_64, Mac OS X 10.4.x/10.5.x/10.6.x, VMware 4.x
Battery: No | Optional
Fan: No | Yes

Manufacturer: HighPoint | LSI
Product: RocketRAID 2720SGL | MegaRAID 9265-8i
Form factor: Low profile MD2 | Low profile MD2
Number of SAS ports: 8 | 8
SAS bandwidth per port: 6 Gb/s (SAS 2.0) | 6 Gb/s (SAS 2.0)
Internal SAS ports: 2x SFF-8087 | 2x SFF-8087
External SAS ports: None | None
Cache: no data | 1 GB DDR3-1333
Main interface: PCIe 2.0 (x8) | PCIe 2.0 (x8)
XOR chip and clock speed: Marvell 9485 / no data | LSI SAS2208 / 800 MHz
Supported RAID levels: 0, 1, 5, 6, 10, 50 | 0, 1, 5, 6, 10, 60
Supported operating systems: Windows 2000/XP/2003/2008/Vista/7, RHEL/CentOS, SLES, OpenSuSE, Fedora Core, Debian, Ubuntu, FreeBSD up to 7.2 | Microsoft Windows Vista/2008/Server 2003/2000/XP, Linux, Solaris (x86), NetWare, FreeBSD, VMware
Battery: No | Optional
Fan: No | No

Test configuration

We connected eight Fujitsu MBA3147RC SAS hard drives (147 GB each) to the RAID controllers and ran benchmarks at RAID levels 0, 5, 6, and 10. SSD tests were carried out with five Samsung SS1605 drives.

Hardware
CPU Intel Core i7-920 (Bloomfield) 45 nm, 2.66 GHz, 8 MB shared L3 cache
Motherboard (LGA 1366) Supermicro X8SAX, Revision: 1.0, Chipset Intel X58 + ICH10R, BIOS: 1.0B
Controller LSI MegaRAID 9280-24i4e
Firmware: v12.12.0-0037
Driver: v4.32.0.64
RAM 3 x 1 GB DDR3-1333 Corsair CM3X1024-1333C9DHX
HDD Seagate NL35 400 GB, ST3400832NS, 7200 rpm, SATA 1.5 Gb/s, 8 MB cache
Power supply OCZ EliteXstream 800W, OCZ800EXS-EU
Benchmarks
Performance CrystalDiskMark 3
I/O performance Iometer 2006.07.27
File Server Benchmark
Web Server Benchmark
Database Benchmark
Workstation Benchmark
Streaming Reads
Streaming Writes
4k Random Reads
4k Random Writes
Software and drivers
Operating system Windows 7 Ultimate

Test results

I/O performance in RAID 0 and 5

The RAID 0 benchmarks show no significant difference between the RAID controllers, with the exception of the HighPoint RocketRAID 2720SGL.




The benchmark in RAID 5 does not help the HighPoint controller regain its lost ground. Unlike the benchmark in RAID 0, all three faster controllers show their strengths and weaknesses more clearly here.




I/O performance in RAID 6 and 10

LSI has optimized its MegaRAID 9265 controller for database, file server, and workstation workloads. All controllers handle the Web server benchmark well, demonstrating roughly the same performance.




In the RAID 10 variant, Adaptec and LSI are vying for the top spot, with the HighPoint RocketRAID 2720SGL in last place.




SSD I/O performance

The LSI MegaRAID 9265 leads the way here, taking full advantage of solid-state storage systems.




Bandwidth in RAID 0, 5 and degraded RAID 5

The LSI MegaRAID 9265 easily leads this benchmark. The Adaptec RAID 6805 is far behind.


The HighPoint RocketRAID 2720SGL without cache does a good job of sequential operations in RAID 5. Other controllers are not much inferior to it either.


Degraded RAID 5


Bandwidth in RAID 6, 10 and degraded RAID 6

As with RAID 5, the HighPoint RocketRAID 2720SGL demonstrates the highest throughput for RAID 6, leaving the Areca ARC-1880i in second place. The impression is that the LSI MegaRAID 9265-8i simply does not like RAID 6.


Degraded RAID 6


Here, the LSI MegaRAID 9265-8i shows itself in the best light, although it lets the Areca ARC-1880i pull ahead.

LSI CacheCade




What is the best 6Gb/s SAS controller?

In general, all four SAS RAID controllers we tested performed well. All offer the necessary functionality and can be successfully used in entry-level and mid-range servers. Beyond solid performance, they provide important features such as mixed SAS/SATA environments and scalability through SAS expanders. All four support the SAS 2.0 standard, which raises throughput from 3 Gb/s to 6 Gb/s per port and introduces new features such as SAS zoning, which allows many controllers to access storage resources through a single SAS expander.

Despite such similarities as a low-profile form factor, an eight-lane PCI Express interface, and eight SAS 2.0 ports, each controller has its own strengths and weaknesses, and analyzing them lets us recommend the optimal use for each.

The fastest controller, then, is the LSI MegaRAID 9265-8i, especially in terms of I/O performance, although it has some weaknesses, in particular unimpressive throughput in RAID 5 and 6. The MegaRAID 9265-8i leads in most benchmarks and is an excellent professional-level solution. At $630 it is also the most expensive of the four, which should not be forgotten. But for that money you get a great controller that outperforms its competitors, especially when working with SSDs, and its performance becomes especially valuable when connecting large storage systems. What's more, you can increase the performance of the LSI MegaRAID 9265-8i with FastPath or CacheCade, which, of course, cost extra.

The Adaptec RAID 6805 and Areca ARC-1880i controllers deliver similar performance and are close in price ($460 and $540). Both work well across the various benchmarks. The Adaptec controller delivers slightly better performance than the Areca, and it also offers the much-requested ZMCP (Zero Maintenance Cache Protection) feature, which replaces a conventional battery backup for cache protection and allows operation to continue through power failures.

The HighPoint RocketRAID 2720SGL sells for just $170, much cheaper than the other three controllers we tested. Its performance is quite sufficient for conventional hard drives, although worse than the Adaptec or Areca controllers, and you should not use this controller with SSDs.

Briefly about modern RAID controllers

Currently, RAID controllers as standalone products are aimed exclusively at the specialized server market. Indeed, all modern motherboards for user PCs (as opposed to server boards) have integrated hardware-software SATA RAID controllers whose capabilities are more than enough for PC users. Keep in mind, though, that these controllers are designed solely for use under Windows operating systems; under operating systems of the Linux family, RAID arrays are created in software, and all calculations are offloaded from the RAID controller to the CPU.

Servers traditionally use either hardware-software or purely hardware RAID controllers. A hardware RAID controller creates and maintains a RAID array without involving the operating system or the central processor. The operating system sees such an array as a single disk (a SCSI disk), so no specialized driver is needed: the standard SCSI disk driver included with the operating system is used. In this respect hardware controllers are platform-independent, and the RAID array is configured through the controller's BIOS. A hardware RAID controller does not burden the CPU with checksum calculations and the like, since it uses its own specialized processor and RAM.

Hardware-software controllers require a specialized driver that replaces the standard SCSI disk driver. In addition, they come with management utilities and are therefore tied to specific operating systems. All the necessary calculations are still performed by the RAID controller's own processor, but the software driver and management utility let you control the controller through the operating system, not just through the controller's BIOS.

Given the fact that server SCSI disks have already been replaced by SAS disks, all modern server RAID controllers are focused on supporting either SAS or SATA disks, which are also used in servers.

Last year, drives with the new SATA 3 (SATA 6 Gb/s) interface began to appear on the market, gradually displacing the SATA 2 (SATA 3 Gb/s) interface, and drives with the SAS (3 Gb/s) interface have been replaced by drives with the SAS 2.0 (6 Gb/s) interface. Naturally, the new SAS 2.0 standard is fully compatible with the old one.

Accordingly, RAID controllers supporting the SAS 2.0 standard have appeared. It might seem there is no point in moving to SAS 2.0 when even the fastest SAS disks read and write at no more than 200 MB/s, so that the bandwidth of the older SAS protocol (3 Gb/s, or 300 MB/s) is quite enough for them.

Indeed, when each drive is connected to a separate port on the RAID controller, 3 Gb/s (300 MB/s in theory) is sufficient. However, each RAID controller port can accept not only individual disks but also disk arrays (disk cages). In that case one SAS channel is shared by several drives at once, and 3 Gb/s of bandwidth is no longer enough. And besides, there are SSDs, whose read and write speeds have already passed the 300 MB/s bar: for example, the new Intel SSD 510 offers sequential reads of up to 500 MB/s and sequential writes of up to 315 MB/s.

After a brief introduction to the current situation in the server RAID controller market, let's take a look at the specifications of the LSI 3ware SAS 9750-8i controller.

3ware SAS 9750-8i RAID Controller Specifications

This RAID controller is based on a specialized LSI SAS2108 XOR processor with a clock frequency of 800 MHz and the PowerPC architecture, paired with 512 MB of DDR2-800 memory with error correction (ECC).

The LSI 3ware SAS 9750-8i controller is compatible with SATA and SAS drives (both HDDs and SSDs are supported) and allows up to 96 devices to be connected using SAS expanders. Importantly, the controller supports both SATA 6 Gb/s (SATA III) and SAS 2.0 drives.

To connect disks, the controller has eight ports, physically combined into two Mini-SAS SFF-8087 connectors (four ports per connector). That is, with disks connected directly to the ports, a total of eight disks can be attached; with disk cages connected to the ports, the total number of disks can be increased to 96. Each of the controller's eight ports has a bandwidth of 6 Gb/s, corresponding to the SAS 2.0 and SATA III standards.

Naturally, connecting disks or disk cages to this controller requires specialized cables with an internal Mini-SAS SFF-8087 connector on one end; the connector on the other end depends on what is being attached. For example, to connect SAS drives directly to the controller, you need a cable with a Mini-SAS SFF-8087 connector on one side and four SFF-8484 connectors on the other. Note that the cables are not included and must be purchased separately.

The LSI 3ware SAS 9750-8i controller has a PCI Express 2.0 x8 interface, which provides a throughput of 64 Gb/s (32 Gb/s in each direction). Clearly, this is quite enough even with all eight 6 Gb/s SAS ports fully loaded. The controller also has a special connector for an optional LSIiBBU07 backup battery.

It is important to note that this controller requires a driver, that is, it is a hardware-software RAID controller. Supported operating systems include Windows Vista, Windows Server 2008, Windows Server 2003 x64, Windows 7, Windows Server 2003, Mac OS X, Linux (Fedora Core 11, Red Hat Enterprise Linux 5.4, OpenSuSE 11.1, SuSE Linux Enterprise Server (SLES) 11, and other Linux family systems), OpenSolaris 2009.06, and VMware ESX/ESXi 4.0/4.0 update 1. The package also includes 3ware Disk Manager 2 software, which allows you to manage RAID arrays from within the operating system.

The LSI 3ware SAS 9750-8i controller supports the standard RAID types: RAID 0, 1, 5, 6, 10, and 50. Perhaps the only common array type it does not support is RAID 60. The reason is that this controller requires at least five drives for a RAID 6 array (theoretically, RAID 6 can be built on four drives). Accordingly, a RAID 60 array would need at least ten disks connected directly to the controller, more than its eight ports allow.

Support for RAID 1 is of little relevance for such a controller, since this type of array is created on only two disks, and using such a controller for just two disks is illogical and extremely wasteful. But support for RAID 0, 5, 6, 10, and 50 arrays is very relevant. Although perhaps we were hasty in including RAID 0: this array has no redundancy and does not provide reliable data storage, so it is rarely used in servers, even though it is theoretically the fastest in read and write speed. Let us recall how the different types of RAID arrays differ from each other and what they are.

RAID levels

The term "RAID array" appeared in 1987, when American researchers Patterson, Gibson and Katz from the University of California at Berkeley in their article "A case for redundant arrays of inexpensive discs, RAID") described how way you can combine several cheap hard drives into a single logical unit so that the result is increased system capacity and speed, and the failure of individual disks does not lead to the failure of the entire system. Almost 25 years have passed since the publication of this article, but the technology for building RAID arrays has not lost its relevance today. The only thing that has changed since then is the decoding of the acronym RAID. The fact is that initially RAID arrays were not built on cheap disks at all, so the word Inexpensive (“inexpensive”) was changed to Independent (“independent”), which was more true.

Fault tolerance in RAID arrays is achieved through redundancy, that is, part of the disk space capacity is allocated for service purposes, becoming inaccessible to the user.

The performance of the disk subsystem is increased by operating several disks simultaneously, and in this sense the more disks in the array (up to a certain limit), the better.

Drives in an array can be accessed using either parallel or independent access. With parallel access, disk space is divided into blocks (stripes) for recording data, and information to be written is divided into the same blocks. When writing, individual blocks go to different disks, several blocks at a time, which increases write performance. The necessary information is likewise read in separate blocks from several disks simultaneously, so read performance also grows in proportion to the number of disks in the array.

It should be noted that the parallel access model only works when the size of a write request is larger than the block size itself; otherwise, writing several blocks in parallel is practically impossible. Imagine that the block size is 8 KB and a write request is 64 KB. The source data is then cut into eight 8 KB blocks. With a four-disk array, four blocks (32 KB) can be written at a time, so write and read speeds are ideally four times higher than with a single disk. That holds only in the ideal case, however, since the request size is not always a multiple of the block size times the number of disks in the array.
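The arithmetic of this example is easy to check with a small sketch (the block size, request size, and disk count are taken from the example above):

    # Striping arithmetic from the example above: a 64 KB write request,
    # an 8 KB stripe block, and a four-disk array.
    request_kb, block_kb, disks = 64, 8, 4

    blocks = request_kb // block_kb          # 8 blocks of 8 KB each
    passes = blocks // disks                 # 2 passes: 4 blocks per pass
    per_pass_kb = disks * block_kb           # 32 KB written simultaneously

    print(blocks, passes, per_pass_kb)       # 8 2 32
    # Ideally the array is 4x faster than a single disk, as long as the
    # request size is a multiple of block size times the number of disks.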

If the size of the recorded data is less than the block size, then a fundamentally different model is implemented - independent access. Moreover, this model can also be used when the size of the data to be written is larger than the size of one block. With independent access, all data of a particular request is written to a separate disk, that is, the situation is identical to working with a single disk. The advantage of the independent access model is that if multiple write (read) requests arrive at the same time, they will all be executed on separate disks independently of each other. This situation is typical, for example, for servers.

Corresponding to these access types, there are different kinds of RAID arrays, usually characterized by RAID levels. Besides the access type, RAID levels differ in how redundant information is placed and computed: it can be stored on a dedicated disk or distributed across all disks.

Several RAID levels are in wide use today: RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 50, and RAID 60. RAID 2, RAID 3, and RAID 4 were used in the past, but these levels are no longer employed and modern RAID controllers do not support them. Note that all modern RAID controllers also support the JBOD (Just a Bunch Of Disks) mode, which is not a RAID array at all but simply the connection of individual disks to the RAID controller.

RAID 0

RAID 0, or striping, is strictly speaking not a RAID array, since it has no redundancy and provides no data storage reliability; historically, however, it is called one. A RAID 0 array (Fig. 1) can be built on two or more disks and is used when high disk subsystem performance is needed and data storage reliability is not critical. When a RAID 0 array is created, information is divided into blocks called stripes, which are written simultaneously to separate disks, creating a system with parallel access (block size permitting). By allowing simultaneous I/O on multiple drives, RAID 0 provides the fastest data transfer speeds and the most efficient use of disk space, since no room is needed for checksums. Implementing this level is very simple, and RAID 0 is mainly used where large amounts of data must be transferred quickly.

Fig. 1. RAID 0 array

Theoretically, the increase in read and write speed should be a multiple of the number of disks in the array.

The reliability of a RAID 0 array is obviously lower than that of any of its disks individually, and it decreases as disks are added, since the failure of any one disk makes the entire array inoperable. If the mean time to failure of each disk is MTTF_disk, then the MTTF of a RAID 0 array of n disks is:

MTTF_RAID0 = MTTF_disk / n.

If we denote by p the probability that one disk fails within a certain period, then for a RAID 0 array of n disks the probability that at least one disk fails (the probability of array failure) is:

P(array failure) = 1 - (1 - p)^n.

For example, if the probability of failure of one disk within three years of operation is 5%, then the probability of failure of a two-disk RAID 0 array is already 9.75%, and of an eight-disk array, 33.7%.
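These figures are easy to reproduce from the formula above (a minimal sketch):

    # Probability that a RAID 0 array of n disks fails, given per-disk
    # failure probability p over the same period: P = 1 - (1 - p)^n.
    def raid0_failure(p, n):
        return 1 - (1 - p) ** n

    p = 0.05                                   # 5% per-disk failure probability
    print(round(raid0_failure(p, 2) * 100, 2)) # 9.75 (%)
    print(round(raid0_failure(p, 8) * 100, 1)) # 33.7 (%)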

RAID 1

A RAID 1 array (Fig. 2), also known as a mirror, is a two-disk array with 100 percent redundancy: the data is completely duplicated (mirrored), which yields a very high level of reliability (and cost). Note that RAID 1 does not require partitioning the disks and data into blocks. In the simplest case, two drives hold identical information and appear as one logical drive. When one disk fails, the other takes over its functions (completely transparently to the user), and the array is restored by simple copying. In addition, the read speed should theoretically double, since reads can be performed from both disks simultaneously. This storage scheme is used mainly where the cost of data loss is much higher than the cost of the storage system.

Fig. 2. RAID 1 array

If, as in the previous case, we denote by p the probability that one disk fails within a certain period, then for a RAID 1 array the probability that both disks fail simultaneously (the probability of array failure) is:

P(array failure) = p^2.

For example, if the probability of failure of one disk within three years of operation is 5%, then the probability of simultaneous failure of two disks is already 0.25%.

RAID 5

A RAID 5 array (Fig. 3) is a fault-tolerant disk array with distributed checksum storage. When writing, the data stream is divided into blocks (stripes), which are written to all disks of the array simultaneously, in cyclic order.

Fig. 3. RAID 5 array

Suppose the array contains n disks and the stripe size is d. For each portion of (n-1) stripes, a checksum p is calculated.

Stripe d_1 is written to the first disk, stripe d_2 to the second, and so on up to stripe d_(n-1), which is written to the (n-1)-th disk. The checksum p_n is then written to the n-th disk, and the process repeats cyclically, starting again from the first disk, on which stripe d_n is written.

The (n-1) stripes and their checksum are written simultaneously to all n disks.

The checksum is computed by a bitwise XOR operation over the data blocks being written. Thus, if there are n hard drives and d is a data block (stripe), the checksum is calculated by the formula:

p_n = d_1 ⊕ d_2 ⊕ ... ⊕ d_(n-1).

If any disk fails, the data on it can be recovered from the checksum and from the data remaining on the healthy disks. Indeed, using the identities (a ⊕ b) ⊕ b = a and a ⊕ a = 0, we get:

d_1 ⊕ ... ⊕ d_(k-1) ⊕ d_(k+1) ⊕ ... ⊕ d_(n-1) ⊕ p_n = (d_1 ⊕ ... ⊕ d_(n-1)) ⊕ p_n ⊕ d_k = p_n ⊕ p_n ⊕ d_k = d_k.

Thus, if the disk holding block d_k fails, that block can be restored from the values of the remaining blocks and the checksum.
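This recovery procedure can be demonstrated directly on byte strings (a minimal sketch with toy 4-byte stripes; a real controller performs the same XOR over whole blocks):

    # RAID 5 parity: p = d1 XOR d2 XOR ... XOR d(n-1); any lost block can be
    # rebuilt by XOR-ing the parity with the surviving blocks.
    from functools import reduce

    def xor_blocks(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    stripes = [b'AAAA', b'BBBB', b'CCCC', b'DDDD']    # d1..d4 (toy stripes)
    parity = reduce(xor_blocks, stripes)              # p = d1 ^ d2 ^ d3 ^ d4

    lost = 2                                          # pretend the d3 disk failed
    survivors = [s for i, s in enumerate(stripes) if i != lost]
    rebuilt = reduce(xor_blocks, survivors, parity)   # p ^ d1 ^ d2 ^ d4 = d3
    assert rebuilt == stripes[lost]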

In the case of RAID 5, all disks in the array must be the same size, however, the total capacity of the disk subsystem available for writing is reduced by exactly one disk. For example, if five disks are 100 GB, then the actual size of the array is 400 GB because 100 GB is allotted for parity information.

A RAID 5 array can be built on three or more hard drives. As the number of hard drives in an array increases, redundancy decreases. Note also that a RAID 5 array can be rebuilt if only one drive fails. However, if two drives fail at the same time (or if the second drive fails while the array is being rebuilt), then the array cannot be recovered.

RAID 6

As shown above, a RAID 5 array is recoverable when one drive fails. Sometimes, however, a higher level of reliability is needed. In that case you can use a RAID 6 array (Fig. 4), which allows the array to be restored even if two disks fail at the same time.

Fig. 4. RAID 6 array

RAID 6 is similar to RAID 5 but uses two checksums rather than one, both distributed cyclically across the disks. The first checksum, p, is calculated by the same algorithm as in RAID 5, that is, as an XOR of the data blocks written to the different disks:

p_n = d_1 ⊕ d_2 ⊕ ... ⊕ d_(n-1).

The second checksum is calculated by a different algorithm. Without going into the mathematical details: it is also an XOR over the data blocks, but each block is first multiplied by a polynomial coefficient:

q_n = g_1·d_1 ⊕ g_2·d_2 ⊕ ... ⊕ g_(n-1)·d_(n-1).

Accordingly, the capacity of two disks in the array is given over to checksums. Theoretically, a RAID 6 array can be created on four or more drives, but many controllers require at least five.
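For illustration, here is a simplified sketch of how p and q can be computed. An assumption worth flagging: the coefficients g_i are taken as powers of 2 in the Galois field GF(2^8) with the 0x11D reduction polynomial, a common choice in RAID 6 implementations; real controllers use optimized lookup tables rather than bitwise loops:

    # Sketch of RAID 6 dual parity: P is plain XOR parity, Q is an XOR of
    # the data blocks each multiplied in GF(2^8) by a coefficient g_i = 2^i.

    def gf_mul(a, b, poly=0x11D):
        # Multiply a and b in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1.
        r = 0
        for _ in range(8):
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a & 0x100:
                a ^= poly
        return r

    def pq_parity(blocks):
        # blocks: equal-length byte strings d1..d(n-1); returns (p, q).
        size = len(blocks[0])
        p, q = bytearray(size), bytearray(size)
        g = 1                                  # coefficient for the current block
        for d in blocks:
            for j, byte in enumerate(d):
                p[j] ^= byte                   # p = d1 xor d2 xor ...
                q[j] ^= gf_mul(g, byte)        # q = g1*d1 xor g2*d2 xor ...
            g = gf_mul(g, 2)                   # next coefficient: g_i = 2^i
        return bytes(p), bytes(q)

    p, q = pq_parity([b'AAAA', b'BBBB', b'CCCC'])
    # With p alone any single lost block is recoverable (as in RAID 5);
    # having both p and q lets the controller solve for any two lost blocks.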

Keep in mind that the performance of a RAID 6 array is usually 10-15% lower than that of a RAID 5 array with the same number of drives, owing to the larger amount of computation the controller performs: the second checksum must be calculated, and more disk blocks must be read and rewritten each time a block is written.

RAID 10

A RAID 10 array (Fig. 5) is a combination of levels 0 and 1. The minimum requirement for this level is four drives. In a four-drive RAID 10 array, the drives are combined in pairs into RAID 1 arrays, and these two arrays are then combined as logical drives into a RAID 0 array. The reverse approach is also possible: first the disks are combined into RAID 0 arrays, and the resulting logical disks into a RAID 1 array.

Fig. 5. RAID 10 array

RAID 50

A RAID 50 array is a combination of levels 0 and 5 (Fig. 6). The minimum requirement for this level is six disks. In a RAID 50 array, two RAID 5 arrays are first created (of at least three disks each), which are then combined as logical disks into a RAID 0 array.

Fig. 6. RAID 50 array

LSI 3ware SAS 9750-8i Controller Test Methodology

To test the LSI 3ware SAS 9750-8i RAID controller, we used the specialized IOmeter 1.1.0 test package (build of 2010.12.02). The test bench had the following configuration:

  • processor - Intel Core i7-990 (Gulftown);
  • motherboard - GIGABYTE GA-EX58-UD4;
  • memory - DDR3-1066 (3 GB, three-channel mode);
  • system disk - WD Caviar SE16 WD3200AAKS;
  • video card - GIGABYTE GeForce GTX480 SOC;
  • RAID controller - LSI 3ware SAS 9750-8i;
  • The SAS drives connected to the RAID controller are Seagate Cheetah 15K.7 ST3300657SS.

Testing was carried out under the Microsoft Windows 7 Ultimate (32-bit) operating system.

We used the RAID controller Windows driver version 5.12.00.007 and also updated the controller firmware to version 5.12.00.007.

The system drive was connected to a SATA port implemented by the controller integrated into the ICH10R south bridge of the Intel X58 chipset, while the SAS drives were connected directly to the RAID controller's ports using two Mini-SAS SFF-8087 to 4x SAS cables.

The RAID controller was installed in the PCI Express x8 slot on the system board.

The controller was tested with the following RAID arrays: RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, and RAID 50. For each array type, the number of drives combined into the array varied from the supported minimum up to eight.

The stripe size was fixed at 256 KB for all RAID arrays.

Recall that the IOmeter package can work both with disks on which a logical partition has been created and with disks without one. When a disk is tested without a logical partition, IOmeter works at the level of logical data blocks: bypassing the operating system, it sends the controller commands to write or read LBA blocks.

If a logical partition has been created on the disk, IOmeter first creates a file that by default occupies the entire partition (in principle, the file size can be changed by specifying it as a number of 512-byte sectors) and then works with that file, reading and writing (rewriting) individual LBA blocks within it. Again, IOmeter bypasses the operating system, sending read/write requests directly to the controller.

In practice, when testing HDDs, there is almost no difference between the results for a disk with and without a logical partition. At the same time, we believe testing without a logical partition is more correct, since the results then do not depend on the file system used (NTFS, FAT, ext, and so on). That is why we performed our testing without creating logical partitions.
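What "working at the level of LBA blocks" means can itself be sketched in a few lines (an illustration only, not IOmeter's actual code; the device path is hypothetical, and running this requires root privileges on Linux):

    # Sketch of LBA-level access: random 512-byte reads at raw device
    # offsets, bypassing the file system.
    import os, random

    DEV, SECTOR = '/dev/sdb', 512                # illustrative device path

    fd = os.open(DEV, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)          # device size in bytes
    for _ in range(1000):
        lba = random.randrange(size // SECTOR)   # pick a random logical block
        os.pread(fd, SECTOR, lba * SECTOR)       # read that block directly
    os.close(fd)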

In addition, IOmeter allows you to set the request block size (Transfer Request Size) for reads and writes. Tests can be run both for sequential (Sequential) access, where LBA blocks are read and written one after another, and for random (Random) access, where LBA blocks are read and written in random order. When building a load scenario, you can set the test duration, the percentage split between sequential and random operations (Percent Random/Sequential Distribution), and the split between read and write operations (Percent Read/Write Distribution). IOmeter can also automate the entire testing process, saving all results to a CSV file that is easily imported into an Excel spreadsheet.

Another setting IOmeter offers is the alignment of data transfer request blocks (Align I/Os on) to hard disk sector boundaries. By default, IOmeter aligns request blocks to 512-byte sector boundaries, but arbitrary alignment can also be set. Most hard drives do have a sector size of 512 bytes; only recently have disks with 4 KB sectors begun to appear. Recall that for HDDs, a sector is the minimum addressable unit of data that can be written to or read from the disk.

When testing, the request block alignment must be set to match the disk sector size. Since the Seagate Cheetah 15K.7 ST3300657SS drives use 512-byte sectors, we used 512-byte sector boundary alignment.

Using the IOmeter test package, we measured the sequential read and write speed, as well as the random read and write speed of the created RAID array. The sizes of the transmitted data blocks were 512 bytes, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 and 1024 KB.

In these load scenarios, the test time for each data block size was five minutes. Note also that in all of the above tests we set the task queue depth (# of Outstanding I/Os) to 4 in the IOmeter settings, which is typical for user applications.

Test results

Analyzing the test results, we were surprised by the performance of the LSI 3ware SAS 9750-8i RAID controller, so much so that we combed through our test scripts looking for errors and then repeated the testing many times with different controller settings, changing the stripe size and the controller's cache mode. This, of course, affected the results, but it did not change the overall dependence of transfer rate on data block size, a dependence we simply cannot explain. The controller's behavior strikes us as completely illogical. First, the results are unstable: at each fixed data block size the speed fluctuates, and the averaged result carries a large error. Usually, the results of testing disks and controllers with IOmeter are stable and show very little spread.

Second, as block size increases, the data rate should grow or level off at saturation (once the rate reaches its maximum). With the LSI 3ware SAS 9750-8i, however, the transfer rate drops sharply at certain block sizes. It also remains a mystery to us why, for the same number of disks, RAID 5 and RAID 6 arrays write faster than they read. In short, we cannot explain the behavior of the LSI 3ware SAS 9750-8i controller; we can only state the facts.

The test results can be grouped in various ways: by load scenario, giving for each load type the results of all possible RAID arrays with different numbers of connected disks; by RAID array type, giving for each array type the results for different numbers of disks in the sequential read, sequential write, random read, and random write scenarios; or by the number of drives in the array, giving for each drive count the results of all RAID arrays possible with that many drives in the same four scenarios.

We decided to group the results by array type because, in our opinion, despite the rather large number of graphs, this presentation is the most visual.

RAID 0

A RAID 0 array can be created with two to eight drives. The test results for the RAID 0 array are shown in Figs. 7-15.

Fig. 7. Sequential read and write speed with eight drives in a RAID 0 array

Fig. 8. Sequential read and write speed with seven drives in a RAID 0 array

Fig. 9. Sequential read and write speed with six drives in a RAID 0 array

Fig. 10. Sequential read and write speed with five drives in a RAID 0 array

Fig. 11. Sequential read and write speed with four drives in a RAID 0 array

Fig. 12. Sequential read and write speed with three drives in a RAID 0 array

Fig. 13. Sequential read and write speed with two drives in a RAID 0 array

Fig. 14. Random read speed in a RAID 0 array

Fig. 15. Random write speed in a RAID 0 array

It is clear that the highest sequential read and write speeds in a RAID 0 array are achieved with eight drives. Note that with eight and seven drives the sequential read and write speeds are almost identical, while with fewer drives the sequential write speed becomes higher than the read speed.

We must also note the characteristic dips in sequential read and write speed at certain block sizes. For example, with eight and six disks in the array these dips occur at block sizes of 1 KB and 64 KB, and with seven disks at 1, 2, and 128 KB. Similar dips, at other block sizes, also occur with four, three, and two disks in the array.

In terms of sequential read and write performance (averaged over all block sizes), a RAID 0 array outperforms all other possible arrays in eight, seven, six, five, four, three, and two drive configurations.

Random access in a RAID 0 array is also quite interesting. The random read speed for each data block size is proportional to the number of disks in the array, which is quite logical. Moreover, with a block size of 512 KB, for any number of disks in the array, there is a characteristic dip in random reading speed.

With random writes for any number of disks in the array, the speed increases with the size of the data block and there are no drops in speed. At the same time, it should be noted that the highest speed in this case is achieved not with eight, but with seven disks in the array. Next in terms of random write speed is an array of six disks, then five, and only then eight disks. Moreover, in terms of random write speed, an array of eight disks is almost identical to an array of four disks.

The random write performance of a RAID 0 array outperforms all other arrays available in eight, seven, six, five, four, three, and two drive configurations. However, in terms of random read speed in an eight-drive configuration, RAID 0 is inferior to RAID 10 and RAID 50 arrays, but in a configuration with fewer drives, RAID 0 leads in random read speed.

RAID 5

A RAID 5 array can be created with three to eight drives. The test results for a RAID 5 array are shown in Figs. 16-23.

Fig. 16. Sequential read and write speed with eight drives in a RAID 5 array

Fig. 17. Sequential read and write speed with seven drives in a RAID 5 array

Fig. 18. Sequential read and write speed with six drives in a RAID 5 array

Fig. 19. Sequential read and write speed with five drives in a RAID 5 array

Fig. 20. Sequential read and write speed with four drives in a RAID 5 array

Fig. 21. Sequential read and write speed with three drives in a RAID 5 array

Fig. 22. Random read speed in a RAID 5 array

Fig. 23. Random write speed in a RAID 5 array

It is clear that the highest read and write speed is achieved with eight disks. It is worth noting that for a RAID 5 array, the sequential write speed is on average higher than the read speed. However, for a certain request size, sequential read speeds can exceed sequential write speeds.

Again, there are characteristic dips in sequential read and write speed at certain block sizes, for any number of disks in the array.

In an eight-drive configuration, RAID 5 is slower than RAID 0 and RAID 50 in sequential reads and writes, but outperforms RAID 10 and RAID 6. In a seven-drive configuration, RAID 5 is slower than RAID 0 in sequential reads and writes but outperforms RAID 6 (no other array types are possible with that number of drives).

In six-drive configurations, RAID 5 trails only RAID 0 and RAID 50 in sequential reads, and only RAID 0 in sequential writes.

In five, four, and three drive configurations, RAID 5 is second only to RAID 0 in sequential read and write speeds.

Random access in a RAID 5 array resembles that in a RAID 0 array: at a block size of 512 KB there is the characteristic dip in random read speed for any number of disks. Note, however, that random read speed depends only slightly on the number of disks in the array; it is roughly the same for any disk count.

In terms of random read speed, the RAID 5 array in eight, seven, six, four, and three drive configurations is inferior to all other arrays. And only in a five-drive configuration is it slightly ahead of a RAID 6 array.

In terms of random write speed, an eight-drive RAID 5 array is second only to RAID 0 and RAID 50 arrays, and a seven-drive, five-drive, four-drive, and three-drive configuration is second only to a RAID 0 array.

In the six-drive configuration, RAID 5 trails RAID 0, RAID 50, and RAID 10 in random write performance.

RAID 6

The LSI 3ware SAS 9750-8i controller allows a RAID 6 array to be created with five to eight drives. The test results for a RAID 6 array are shown in Figs. 24-29.

Fig. 24. Sequential read and write speed with eight drives in a RAID 6 array

Fig. 25. Sequential read and write speed with seven drives in a RAID 6 array

We also note the characteristic dips in sequential read and write speeds for certain block sizes for any number of disks in the array.

In terms of sequential read speed, the RAID 6 array is inferior to all other arrays in configurations with any (from eight to five) number of drives.

The sequential write situation is somewhat better: in an eight-drive configuration RAID 6 outperforms RAID 10, and in a six-drive configuration it outperforms both RAID 10 and RAID 50. In the remaining configurations, it is in last place for sequential write speed.

Random access in a RAID 6 array is similar to that in RAID 0 and RAID 5 arrays: at a block size of 512 KB there is the characteristic dip in random read speed for any number of disks. Note that the maximum random read speed is achieved with six drives in the array; with seven and eight disks, random read speed is almost the same.

With random writes for any number of disks in the array, the speed increases with the size of the data block and there are no drops in speed. In addition, although the random write speed is proportional to the number of disks in the array, the difference in speed is negligible.

In terms of random read speed, the RAID 6 array in the eight- and seven-drive configuration is only ahead of the RAID 5 array and inferior to all other possible arrays.

In a six-drive configuration, RAID 6 trails RAID 10 and RAID 50 in random read performance, and in a five-drive configuration it trails RAID 0 and RAID 5.

In terms of random write speed, a RAID 6 array with any number of connected disks is inferior to all other possible arrays.

In general, we can state that the RAID 6 array is inferior in performance to the RAID 0, RAID 5, RAID 50 and RAID 10 arrays. That is, in terms of performance, this type of array was in last place.

RAID 10

Fig. 33. Random read speed in a RAID 10 array

Fig. 34. Random write speed in a RAID 10 array

Characteristically, in arrays of eight and six disks, the sequential read speed is higher than the write speed, and in an array of four disks, these speeds are almost the same for any data block size.

For a RAID 10 array, as well as for all other arrays considered, a drop in sequential read and write speeds is characteristic for certain sizes of data blocks for any number of disks in the array.

With random writes for any number of disks in the array, the speed increases with the size of the data block and there are no drops in speed. Also, the random write speed is proportional to the number of disks in the array.

In sequential read speed, RAID 10 trails the RAID 0, RAID 50, and RAID 5 arrays in eight-, six-, and four-drive configurations, and in sequential write speed it is inferior even to RAID 6, i.e. it trails RAID 0, RAID 50, RAID 5, and RAID 6.

In random read speed, however, the RAID 10 array outperforms all other arrays in eight-, six-, and four-drive configurations. In random write speed it loses to RAID 0, RAID 50, and RAID 5 in the eight-drive configuration, to RAID 0 and RAID 50 in the six-drive configuration, and to RAID 0 and RAID 5 in the four-drive configuration.

RAID 50

A RAID 50 array can be built on six or eight drives. The test results for a RAID 50 array are shown in fig. 35-38.

In the random read scenario, as well as for all other considered arrays, there is a characteristic performance dip at a block size of 512 KB.

With random writes for any number of disks in the array, the speed increases with the size of the data block and there are no drops in speed. In addition, the random write speed is proportional to the number of disks in the array, but the difference in speed is insignificant and is observed only with a large (more than 256 KB) data block size.

In terms of sequential read speed, the RAID 50 array is second only to the RAID 0 array (in eight and six drive configurations). In terms of sequential write speed, RAID 50 is also second only to RAID 0 in an eight-drive configuration, and in a six-drive configuration it loses to RAID 0, RAID 5, and RAID 6.

But in terms of random read and write speed, the RAID 50 array is second only to the RAID 0 array and is ahead of all other arrays possible with eight and six disks.

RAID 1

As we have already noted, a RAID 1 array, which can be built on only two disks, is impractical to use on such a controller. However, for the sake of completeness, we present the results for a RAID 1 array on two drives. The test results for a RAID 1 array are shown in fig. 39 and 40.

Fig. 39. Sequential read and write speed in a RAID 1 array

Fig. 40. Random read and write speed in a RAID 1 array

For the RAID 1 array, as for all other arrays considered, a drop in sequential read and write speed at certain data block sizes is characteristic.

In the random read scenario, as with other arrays, there is a characteristic performance dip at a block size of 512 KB.

With random writes, the speed increases with the size of the data block and there are no dips in speed.

A RAID 1 array can only be compared with a RAID 0 array (no other arrays are possible with two drives). It should be noted that with two drives, RAID 1 loses to RAID 0 in all load scenarios except random reads.

Conclusions

The impression left by testing the LSI 3ware SAS 9750-8i controller in combination with Seagate Cheetah 15K.7 ST3300657SS SAS drives is rather mixed. On the one hand, it offers excellent functionality; on the other, the speed dips at certain data block sizes are alarming, and they will certainly affect the performance of RAID arrays in real-world use.

Little has changed over the past two years:

  • Supermicro is ditching the proprietary "flipped" UIO form factor for controllers. Details below.
  • LSI 2108 (SAS2 RAID with 512MB cache) and LSI 2008 (SAS2 HBA with optional RAID support) are still in service. Products based on these chips, both manufactured by LSI and from OEM partners, are well debugged and are still relevant.
  • The LSI 2208 appeared (the same SAS2 RAID with the LSI MegaRAID stack, only with a dual-core processor and 1024 MB of cache), as did the LSI 2308 (an improved version of the LSI 2008 with a faster processor and PCI-E 3.0 support).

Transition from UIO to WIO

As you may remember, UIO boards are ordinary PCI-E x8 boards on which all the components are placed on the reverse side, i.e. facing up when the board is installed in the left riser. This form factor was needed to install a board in the lowest slot of the server, which allowed four boards to be placed in the left riser. UIO is not just an expansion-board form factor: it also covers chassis designed for the risers, the risers themselves, and motherboards of a special form factor with a cutout for the bottom expansion slot and slots for the risers.
This solution had two problems. First, the non-standard form factor limited the customer's choice, since only a few SAS, InfiniBand, and Ethernet controllers exist in the UIO form factor. Second, the riser slots do not carry enough PCI-E lanes - only 36, of which just 24 go to the left riser, which is clearly not enough for four PCI-E x8 boards.
What is WIO? First, it turned out to be possible to place four boards in the left riser without having to "flip the sandwich butter-side up", and risers for regular boards appeared (RSC-R2UU-A4E8+). Then the shortage of lanes (there are now 80) was solved by using slots with a higher pin density.
UIO riser RSC-R2UU-UA3E8+
WIO riser RSC-R2UW-4E8

Results:
  • WIO risers cannot be installed in UIO motherboards (eg X8DTU-F).
  • UIO risers cannot be installed in new WIO boards.
  • There are WIO risers (for the motherboard slot) that provide a UIO slot for cards, in case you still have UIO controllers. They are used in Socket B2 platforms (6027B-URF, 1027B-URF, 6017B-URF).
  • New controllers in the UIO form factor will not appear. For example, the USAS2LP-H8iR controller on the LSI 2108 chip will be the last one, there will be no LSI 2208 for UIO - only a regular MD2 with PCI-E x8.

PCI-E controllers

At the moment, three varieties are relevant: RAID controllers based on the LSI 2108/2208 and HBAs based on the LSI 2308. There is also a mysterious SAS2 HBA, the AOC-SAS2LP-MV8 on a Marvell 9480 chip, but we will not dwell on it because of its exoticism. Most use cases for internal SAS HBAs involve ZFS storage under FreeBSD and various flavors of Solaris; since these operating systems have no support problems with them, the choice in 100% of cases falls on the LSI 2008/2308.
LSI 2108
In addition to the UIO-format AOC-USAS2LP-H8iR mentioned above, two more controllers have been added:

AOC-SAS2LP-H8iR
LSI 2108, SAS2 RAID 0/1/5/6/10/50/60, 512 MB cache, 8 internal ports (2x SFF-8087). An analogue of the LSI 9260-8i controller, but manufactured by Supermicro; there are minor differences in board layout, and the price is $40-50 lower than LSI's. All additional LSI options are supported: activation keys for FastPath and CacheCade 2.0, and battery cache protection - LSIiBBU07 and LSIiBBU08 (the BBU08 is now preferable: it has an extended temperature range and comes with a cable for remote mounting).
Despite the arrival of more powerful LSI 2208-based controllers, the LSI 2108 remains relevant thanks to its reduced price. Its performance with conventional HDDs is sufficient in any scenario, and its IOPS ceiling with SSDs of 150,000 is more than enough for most budget solutions.

AOC-SAS2LP-H4iR
LSI 2108, SAS2 RAID 0/1/5/6/10/50/60, 512 MB cache, 4 internal + 4 external ports. An analogue of the LSI 9280-4i4e controller. Convenient in expander chassis, since you do not have to route an expander output outside to connect additional JBODs, and in 1U chassis for 4 disks when you need to leave room to grow the number of drives. Supports the same BBUs and activation keys.
LSI 2208

AOC-S2208L-H8iR
LSI 2208, SAS2 RAID 0/1/5/6/10/50/60, 1024 MB cache, 8 internal ports (2x SFF-8087 connectors). An analogue of the LSI 9271-8i controller. The LSI 2208 is a further development of the LSI 2108: the processor became dual-core, which raised the IOPS ceiling to 465,000, PCI-E 3.0 support was added, and the cache was increased to 1 GB.
The controller supports BBU09 battery cache protection and CacheVault flash protection. Supermicro supplies them under part numbers BTR-0022L-LSI00279 and BTR-0024L-LSI00297 (the second half of each part number is the native LSI part number), but it is easier to purchase them from us through the LSI sales channel. MegaRAID Advanced Software Options activation keys are also supported: AOC-SAS2-FSPT-ESW (FastPath) and AOCCHCD-PRO2-KEY (CacheCade Pro 2.0).
LSI 2308 (HBA)

AOC-S2308L-L8i and AOC-S2308L-L8e
LSI 2308, SAS2 HBA (RAID 0/1/1E with IR firmware), 8 internal ports (2x SFF-8087 connectors). These are the same controller shipped with different firmware: the AOC-S2308L-L8e carries IT firmware (pure HBA), the AOC-S2308L-L8i carries IR firmware (with RAID 0/1/1E support). The difference is that the L8i can run either IR or IT firmware, while the L8e can run only IT - flashing it to IR is locked. An analogue of the LSI 9207-8i controller. Differences from the LSI 2008: a faster chip (800 MHz, raising the IOPS ceiling to 650,000) and PCI-E 3.0 support. Applications: software RAID (ZFS, for example) and budget servers.
There will be no cheap RAID 5-capable controllers based on this chip (the iMR stack; among off-the-shelf controllers, the LSI 9240).
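
To put these IOPS ceilings (150,000 for the LSI 2108, 465,000 for the LSI 2208, 650,000 for the LSI 2308) into perspective, here is a back-of-the-envelope sketch; the per-drive figures are our own assumptions, not vendor data:

# Rough estimate of how many drives it takes before the controller, rather
# than the drives, becomes the IOPS bottleneck.

CONTROLLER_IOPS = {"LSI 2108": 150_000, "LSI 2208": 465_000, "LSI 2308": 650_000}

HDD_IOPS = 180      # assumed: a 15K RPM SAS HDD on random 4K reads
SSD_IOPS = 40_000   # assumed: a typical SATA SSD of the period

for chip, ceiling in CONTROLLER_IOPS.items():
    print(f"{chip}: ~{ceiling / HDD_IOPS:.0f} HDDs "
          f"or ~{ceiling / SSD_IOPS:.1f} SSDs to saturate")

With conventional HDDs even the LSI 2108 is effectively impossible to saturate, which is why it remains a sensible budget choice; with SSDs, a handful of drives is enough to hit its ceiling.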

Onboard controllers

In its latest products (X9 boards and the platforms based on them), Supermicro denotes the presence of an LSI SAS2 controller with the digit "7" in the part number, while "3" indicates the chipset's SAS (Intel C600). The numbering does not distinguish between the LSI 2208 and 2308, however, so be careful when choosing a board.
  • The LSI 2208-based controller soldered onto motherboards has a limit of 16 disks. If you add a 17th, it simply will not be detected, and you will see the message "PD is not supported" in the MSM log. The compensation is a much lower price: for example, the bundle "X9DRHi-F + external LSI 9271-8i controller" will cost about $500 more than an X9DRH-7F with the LSI 2208 on board. Bypassing this limitation by cross-flashing to LSI 9271 firmware will not work; flashing a different SBR block, as was possible with the LSI 2108, does not help.
  • Another feature is the lack of support for CacheVault modules: there is simply not enough space on the boards for the special connector, so only the BBU09 is supported. Whether the BBU09 can be installed depends on the enclosure used; for example, the LSI 2208 is used in 7127R-S6 blade servers, which have a BBU connector, but mounting the module itself requires an additional MCP-640-00068-0N battery holder bracket.
  • Flashing the SAS HBA (LSI 2308) firmware will now have to be done from UEFI, since in DOS, on every board with an LSI 2308, sas2flash.exe fails to start with the error "Failed to initialize PAL".

Controllers in Twin and FatTwin platforms

Some 2U Twin 2 platforms come in several versions with different types of controllers. For example:
  • 2027TR-HTRF+ - Chipset SATA
  • 2027TR-H70RF+ - LSI 2008
  • 2027TR-H71RF+ - LSI 2108
  • 2027TR-H72RF+ - LSI 2208
Such diversity is ensured by the fact that the controllers are placed on a special backplane that connects to a special slot on the motherboard and to the disk backplane.
BPN-ADP-SAS2-H6IR (LSI 2108)


BPN-ADP-S2208L-H6iR (LSI 2208)

BPN-ADP-SAS2-L6i (LSI 2008)

Supermicro xxxBE16/xxxBE26 Enclosures

Another topic directly related to controllers is the updating of chassis with expander backplanes. Varieties have appeared with an additional cage for two 2.5" disks on the rear panel of the chassis, intended for a dedicated boot disk (or mirror). Of course, the system can be booted from a small volume carved out of another disk group, or from additional disks mounted inside the chassis (846 chassis can take extra mounts for one 3.5" or two 2.5" drives), but the updated modifications are much more convenient:




These additional disks do not have to be connected to the chipset SATA controller: using an SFF-8087 -> 4x SATA cable, they can be connected to the main SAS controller through the expander's SAS output.
P.S. We hope the information was helpful. Remember that for the most complete information and technical support on products from Supermicro, LSI, Adaptec by PMC, and other vendors, you can contact True System.

Today's file server or web server is unthinkable without a RAID array: only this mode of operation can provide the required throughput and storage performance. Until recently, the only hard drives suitable for such work were SCSI drives with spindle speeds of 10-15 thousand RPM, and they required a separate SCSI controller. The data transfer rate over SCSI reached 320 MB/s; however, SCSI is an ordinary parallel interface, with all its shortcomings.

More recently, a new disk interface appeared, called SAS (Serial Attached SCSI). Today, many companies already have controllers for this interface in their product lines, supporting all RAID levels. In this mini-review we will take a look at two members of Adaptec's new SAS controller family: the 8-port ASR-4800SAS and the 4+4-port ASR-48300 12C.

Introduction to SAS

So what kind of interface is SAS? In essence, SAS is a hybrid of SATA and SCSI that absorbed the advantages of both. SATA is a serial interface with independent read and write channels, and each SATA device is connected to its own channel. SCSI has a very efficient and reliable enterprise data-transfer protocol, but its downsides are the parallel interface and a bus shared among several devices. SAS is thus free of the disadvantages of SCSI, has the advantages of SATA, and provides speeds of up to 300 MB/s per channel. The diagram below gives a rough idea of the connection schemes for SCSI and SAS.
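
The 300 MB/s figure is not arbitrary: both first-generation SAS and SATA run at 3 Gbit/s on the wire and use 8b/10b encoding, so ten wire bits carry eight data bits. A short sketch of the arithmetic:

def effective_mb_per_s(line_rate_gbit, encoding=8 / 10):
    # Payload throughput in MB/s for a given line rate and encoding ratio.
    bits_per_second = line_rate_gbit * 1e9 * encoding
    return bits_per_second / 8 / 1e6

print(effective_mb_per_s(3.0))  # 300.0 MB/s -> SAS/SATA at 3 Gb/s
print(effective_mb_per_s(6.0))  # 600.0 MB/s -> SAS/SATA at 6 Gb/s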

The full-duplex nature of the interface virtually eliminates latency from channel turnaround, since the link does not have to switch between reading and writing.

A curious and positive feature of Serial Attached SCSI is that the interface supports both SAS and SATA drives, and both types can be connected to one controller at the same time. SAS drives, however, cannot be connected to a SATA controller: first, they require special SSP (Serial SCSI Protocol) commands to operate, and second, they are physically incompatible with a SATA connector. Each SAS drive connects to its own port, yet it is still possible to attach more drives than the controller has ports - SAS expanders provide this capability.

The basic difference between a SAS drive connector and a SATA drive connector is an additional data port: each Serial Attached SCSI disk has two SAS ports, each with its own unique ID. The technology thereby provides redundancy, which improves reliability.

SAS cables differ slightly from SATA ones, and a special cable accessory is included with a SAS controller. As with SCSI, hard drives of the new standard can be connected not only inside the server case but also outside it, for which special cables and equipment are used. To connect hot-swappable disks, special boards called backplanes are used; they carry all the connectors and ports needed to attach disks and controllers.

As a rule, the backplane is housed in a special enclosure with drive sleds; such an enclosure holds the RAID array and provides its cooling. If one or several disks fail, a failed HDD can be replaced quickly, and the replacement does not stop the array: just swap the disk and the array is fully functional again.

Adaptec SAS Adapters

Adaptec has presented two rather interesting RAID controller models for our consideration. The first is a budget-class device for building RAID in low-cost entry-level servers: the eight-port (4 internal + 4 external) ASR-48300 12C. The second is far more advanced and intended for more serious tasks, with eight internal SAS channels on board: the ASR-4800SAS. Let's take a closer look at each of them, starting with the simpler and cheaper model.

Adaptec ASR-48300 12C

The ASR-48300 12C controller is designed for building small RAID arrays of levels 0, 1, and 10 - that is, the basic types of disk arrays. The model ships in an ordinary cardboard box decorated in blue and black; the front of the package carries a stylized image of a controller flying out of a computer, presumably meant to evoke thoughts of the high speed a computer gains with this unit inside.

The scope of delivery is minimal but includes everything needed to get started. The kit contains the following:

  • ASR-48300 12C controller
  • Low-profile bracket
  • Storage Manager CD
  • Brief manual
  • Connecting cable (SFF-8484 to 4x SFF-8482 with power), 0.5 m

The controller is designed for the PCI-X 133 MHz bus, which is widespread in server platforms. The adapter provides eight SAS ports, but only four of them are implemented as an internal SFF-8484 connector for drives inside the case; the remaining four channels are brought out as an SFF-8470 connector, so some of the drives must be connected externally - for example, an external box with four drives inside.

When an expander is used, the controller can work with up to 128 disks in an array. In addition, the controller can operate in a 64-bit environment and supports the corresponding commands. The card can be installed in a 2U low-profile server using the included low-profile bracket. The general characteristics of the board are as follows.

Advantages

Cost-effective Serial Attached SCSI controller with Adaptec HostRAID™ technology for high-performance storage of critical data.

Client needs

Ideal for entry-level and mid-range server and workgroup applications that require high-performance storage and robust protection, such as backup, web content, e-mail, databases, and data sharing.

System Environment - Department and Workgroup Servers

System bus interface type - PCI-X 64 bit/133 MHz, PCI 33/66

External connections - One x 4 Infiniband/Serial Attached SCSI (SFF8470)

Internal connections - One 32 pin x 4 Serial Attached SCSI (SFF8484)

System Requirements - Server Type IA-32, AMD-32, EM64T and AMD-64

32/64-bit PCI 2.2 or 32/64-bit PCI-X 133 slot

Warranty - 3 years

RAID levels - Adaptec HostRAID 0, 1, and 10

Key features of RAID

  • Support for boot arrays
  • Automatic recovery
  • Management with Adaptec Storage Manager software
  • Background initialization

Board dimensions - 6.35cm x 17.78cm (including external connector)

Operating temperature - 0° to 50° C

Power dissipation - 4 W

Mean Time Between Failures (MTBF) - 1,692,573 hours at 40 °C.

Adaptec ASR-4800SAS

Adapter number 4800 is functionally more advanced. This model is positioned for faster servers and workstations. It supports almost every RAID array type: everything available on the junior model, plus RAID 5, 50, JBOD, and the Adaptec Advanced Data Protection Suite with RAID 1E, 5EE, 6, 60, and Copyback Hot Spare, with a Snapshot Backup option, for tower servers and high-density rack servers.

The model comes in a package similar to the junior model with the design in the same "aviation" style.

The kit contains almost the same items as the junior card:

  • ASR-4800SAS controller
  • Full-size bracket
  • Driver disk and complete guide
  • Storage Manager CD
  • Brief manual
  • Two cables (SFF-8484 to 4x SFF-8482 with power), 1 m each

The controller supports the 133 MHz PCI-X bus; there is also a functionally similar model 4805 that uses a PCI-E x8 bus. The adapter provides the same eight SAS ports, but all eight are implemented as internal ones: the board carries two SFF-8484 connectors (for the two bundled cables). There is also an external SFF-8470 connector for four channels; when it is used, one of the internal connectors is disabled.

Just as with the junior device, the number of disks can be expanded to 128 using expanders. The main difference between the ASR-4800SAS and the ASR-48300 12C, however, is 128 MB of DDR2 ECC memory on the former, used as a cache; it speeds up work with the disk array and optimizes handling of small files. An optional battery module is available to preserve the cache contents when power is lost. The general characteristics of the board are as follows.

Benefits - High performance storage and data protection connectivity for servers and workstations

Customer Needs — Ideal for supporting server and workgroup applications that require consistently high levels of read/write performance, such as video streaming, web content, video-on-demand, fixed content, and reference data storage.

  • System Environment - Department and Workgroup Servers and Workstations
  • System Bus Interface Type - PCI-X 64-bit/133 MHz host interface
  • External connections - SAS connector one x4
  • Internal connections - SAS connectors two x4
  • Data Transfer Rate - Up to 3 Gb/s per port
  • System Requirements - Intel or AMD architecture with free 64-bit 3.3v PCI-X slot
  • Supports EM64T and AMD64 architectures
  • Warranty - 3 years
  • Standard RAID Levels - RAID 0, 1, 10, 5, 50
  • Standard RAID Features - Hot Spare, RAID Level Migration, Online Capacity Expansion, Optimized Disk Utilization, S.M.A.R.T. and SNMP support, plus features from the Adaptec Advanced Data Protection Suite, including:
  1. Hot Space (RAID 5EE)
  2. Striped Mirror (RAID 1E)
  3. Dual Drive Failure Protection (RAID 6)
  4. Copyback Hot Spare
  • Advanced RAID Features - Snapshot Backup
  • Board dimensions - 24cm x 11.5cm
  • Operating temperature - 0 to 55 degrees C
  • Mean Time Between Failures (MTBF) - 931,924 hours at 40 °C.
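
The MTBF figures quoted for both controllers look enormous; converting them to an annualized failure rate (AFR = hours per year / MTBF, the standard approximation) makes them easier to interpret:

HOURS_PER_YEAR = 8766  # 365.25 days

def afr_percent(mtbf_hours):
    # Expected fraction of units failing per year, as a percentage.
    return HOURS_PER_YEAR / mtbf_hours * 100

print(f"ASR-48300 12C: {afr_percent(1_692_573):.2f}% per year")  # ~0.52%
print(f"ASR-4800SAS:   {afr_percent(931_924):.2f}% per year")    # ~0.94%

In other words, roughly one unit in 190 and one in 105, respectively, would be expected to fail in a year of continuous operation at 40 °C.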

Testing

Testing adapters is a tricky business, and we have not yet accumulated much experience with SAS. We therefore decided to compare the speed of SAS hard drives against SATA drives. For this we used our 73 GB Hitachi HUS151473VLS300 15,000 RPM SAS drives with a 16 MB buffer and a 150 GB WD Raptor WD1500ADFD SATA150 10,000 RPM drive with a 16 MB buffer - a direct comparison of two fast drives with different interfaces on the two controllers. The disks were tested with HDTach, which produced the following results.

Adaptec ASR-48300 12C

Adaptec ASR-4800SAS

It was logical to assume that a SAS HDD would be faster than SATA, even though for the comparison we used the fastest WD Raptor, which can easily compete with many 15,000 RPM SCSI drives. As for the differences between the controllers, they are minimal. The older model certainly provides more features - special RAID levels and additional cache memory on board - but the need for them arises only in the corporate sector. An ordinary home user is unlikely to pack eight hard drives into a redundant RAID array at home, even in a heavily modified PC; more likely, four drives will go into a 0+1 array and the rest will be used for data. This is where the ASR-48300 12C comes in handy, and some enthusiast motherboards do have a PCI-X interface. The model's advantages for home use are its relatively affordable price ($350 - modest compared to eight hard drives) and ease of use (insert and connect). In addition, 2.5-inch 10K hard drives are of particular interest here: they consume less power, run cooler, and take up less space.

Conclusions

This is an unusual review for our site, written partly to gauge reader interest in specialized hardware reviews. Today we examined not just two unusual RAID controllers from Adaptec, a well-known and well-established manufacturer of server hardware; this is also an attempt at the first analytical article on our website.

As for today's heroes, Adaptec's SAS controllers, we can say that these two products are a success. The junior model, the $350 ASR-48300, may well take root in a high-performance home computer and, even more readily, in an entry-level server (or a computer playing that role). It has all the prerequisites: convenient Adaptec Storage Manager software, support for 8 to 128 disks, and the basic RAID levels.

The older model is designed for serious tasks. It can, of course, be used in low-cost servers, but only when there are particular requirements for small-file performance and storage reliability, because the card supports all enterprise-class redundant RAID levels and carries 128 MB of fast DDR2 cache with ECC (error-correcting code). The controller costs $950.

ASR-48300 12C

Model advantages

  • Availability
  • Support from 8 to 128 disks
  • Ease of use
  • Stable work
  • Reputation
  • PCI-X slot - for wider popularity, only support for the more common PCI-E is missing

ASR-4800SAS

  • Stable work
  • Manufacturer reputation
  • Good functionality
  • Availability of upgrades (software and hardware)
  • Availability of PCI-E version
  • Ease of use
  • Support from 8 to 128 disks
  • 8 internal SAS links
  • Not well suited to the budget and home segments

In modern computer systems, the SATA and SAS interfaces are used to connect the main hard drives. As a rule, the first suits home workstations and the second suits servers, so the technologies do not compete with each other but meet different requirements. Still, the significant difference in cost and capacity makes users wonder how SAS differs from SATA and look for compromises. Let's see whether that makes sense.

SAS (Serial Attached SCSI) is a serial interface for connecting storage devices, developed on the basis of parallel SCSI to execute the same command set. It is used primarily in server systems.

SATA (Serial ATA) is a serial data-exchange interface based on parallel PATA (IDE). It is used in home, office, and multimedia PCs and laptops.

As far as HDDs are concerned, despite the differing specifications and connectors, there are no cardinal differences between the devices themselves. One-way backward compatibility makes it possible to connect disks with either interface to a server board: a SAS controller accepts SATA drives, but not the reverse.

It is worth noting that both connection options also exist for SSDs, but here the significant difference between SAS and SATA is the cost of the drive: the former can be dozens of times more expensive at comparable capacity. Today such a solution, while not rare, is a specialized one, intended for fast enterprise-class data centers.

Comparison

As we already know, SAS is used in servers and SATA in home systems. In practice this means that many users access the former simultaneously with many concurrent tasks, while the latter serves a single person. The server load is accordingly much higher, so the disks must be suitably fault-tolerant and fast. The SCSI protocols implemented in SAS (SSP, SMP, STP) allow more I/O operations to be processed simultaneously.

For an HDD, access speed is determined first of all by the spindle rotation speed. For desktop systems and laptops, 5400-7200 RPM is necessary and sufficient; a 10,000 RPM SATA drive is almost impossible to find (the WD VelociRaptor series, again aimed at workstations, being the exception), and anything higher is simply unavailable. A SAS HDD spins at no less than 7200 RPM, 10,000 RPM can be considered the standard, and 15,000 RPM is the practical maximum.
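
Spindle speed translates directly into access time: on average the platter must turn half a revolution before the requested sector passes under the head. A small sketch of that relationship (simple geometry, not drive-specific data):

def avg_rotational_latency_ms(rpm):
    # Average rotational latency: half a revolution, in milliseconds.
    return 0.5 * 60_000 / rpm

for rpm in (5400, 7200, 10000, 15000):
    print(f"{rpm:>5} RPM: {avg_rotational_latency_ms(rpm):.2f} ms")
# 5400 -> 5.56 ms, 7200 -> 4.17 ms, 10000 -> 3.00 ms, 15000 -> 2.00 ms

Going from 7200 to 15,000 RPM halves the rotational component of every random access, which is exactly where SAS drives earn their keep.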

Serial SCSI drives are considered more reliable and have a higher MTBF. In practice, this stability is achieved largely through checksum verification. SATA drives, by contrast, suffer from "silent errors", when data is written partially or corrupted, leading to bad sectors.

The main advantage of SAS also serves system fault tolerance: two duplex ports that allow one device to be connected over two channels. Information is then exchanged simultaneously in both directions, and reliability is ensured by Multipath I/O (two controllers back each other up and share the load). The tagged command queue can be up to 256 deep. Most SATA drives have a single half-duplex port, and the NCQ queue depth is no more than 32.
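
On a Linux host the effective queue depth is easy to check: the kernel exposes it in sysfs for SCSI-class (SAS and SATA) devices. A minimal sketch, assuming a Linux system and a device named sda; the path is the standard sysfs location, though not every device exposes the file:

from pathlib import Path

def queue_depth(device="sda"):
    # Read the device's current queue depth from sysfs, if present.
    path = Path(f"/sys/block/{device}/device/queue_depth")
    return int(path.read_text()) if path.exists() else None

print(queue_depth("sda"))  # e.g. 32 for a SATA drive with NCQ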

The SAS interface assumes the use of cables up to 10 m long. Up to 255 devices can be connected to one port through expanders. SATA is limited to 1m (2m for eSATA), and only supports point-to-point connection of one device.

The difference between SAS and SATA is also felt quite sharply in their prospects for further development. SAS bandwidth reaches 12 Gb/s, and manufacturers have announced support for 24 Gb/s. The latest revision of SATA stopped at 6 Gb/s and will not evolve further in this respect.

In terms of cost per gigabyte, SATA drives have a very attractive price tag. In systems where data-access speed is not critical and the volume of stored information is large, they are the sensible choice.
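
The price-per-gigabyte argument is easy to put in numbers. In the sketch below the prices and capacities are hypothetical placeholders, not quotes from any vendor:

def price_per_gb(price_usd, capacity_gb):
    return price_usd / capacity_gb

sata = price_per_gb(90.0, 2000)   # assumed: a 2 TB desktop SATA drive
sas = price_per_gb(250.0, 300)    # assumed: a 300 GB 15K SAS drive

print(f"SATA: ${sata:.3f}/GB, SAS: ${sas:.3f}/GB, ratio ~{sas / sata:.0f}x")
# SATA: $0.045/GB, SAS: $0.833/GB, ratio ~19x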

Table

SAS                                                   | SATA
For server systems                                    | Primarily for desktop and mobile systems
Uses the SCSI command set                             | Uses the ATA command set
HDD spindle speed 7200 RPM minimum, 15000 RPM maximum | 5400 RPM minimum, 7200 RPM maximum
Checksum verification when writing data               | Higher rate of errors and bad sectors
Two duplex ports                                      | One half-duplex port
Multipath I/O supported                               | Point-to-point connection
Command queue up to 256 deep                          | Command queue up to 32 deep
Cables up to 10 m                                     | Cables no longer than 1 m (2 m for eSATA)
Bandwidth up to 12 Gb/s (24 Gb/s announced)           | Bandwidth 6 Gb/s (SATA III)
Drives cost more, sometimes significantly             | Cheaper per gigabyte