
PCI Express

PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe or PCI-e,[1] is a high-speed serial computer expansion bus standard, designed to replace the older PCI, PCI-X and AGP bus standards. It is the common motherboard interface for personal computers' graphics cards, sound cards, hard disk drive host adapters, SSDs, Wi-Fi and Ethernet hardware connections.[2] PCIe has numerous improvements over the older standards, including higher maximum system bus throughput, lower I/O pin count and smaller physical footprint, better performance scaling for bus devices, a more detailed error detection and reporting mechanism (Advanced Error Reporting, AER),[3] and native hot-swap functionality. More recent revisions of the PCIe standard provide hardware support for I/O virtualization.

PCI Express
Peripheral Component Interconnect Express

Year created: 2003
Created by: Intel, Dell, HP, IBM
Supersedes: PCI, PCI-X, AGP
Width in bits: 1 per lane (up to 16 lanes)
No. of devices: 1 on each endpoint of each connection[a]
Speed: Dual simplex; examples in single-lane (x1) and 16-lane (x16):
  • Version 1.x: 2.5 GT/s
    • x1: 250 MB/s
    • x16: 4 GB/s
  • Version 2.x: 5 GT/s
    • x1: 500 MB/s
    • x16: 8 GB/s
  • Version 3.x: 8 GT/s
    • x1: 985 MB/s
    • x16: 15.75 GB/s
  • Version 4.0: 16 GT/s
    • x1: 1.97 GB/s
    • x16: 31.5 GB/s
  • Version 5.0: 32 GT/s
    • x1: 3.94 GB/s
    • x16: 63 GB/s
  • Version 6.0: 64 GT/s
    • x1: 7.56 GB/s
    • x16: 121 GB/s
  • Version 7.0: 128 GT/s
    • x1: 15.13 GB/s
    • x16: 242 GB/s
Style: Serial
Hotplugging interface: Yes (with ExpressCard, OCuLink, CFexpress or U.2)
External interface: Yes (with OCuLink or PCI Express External Cabling)
Website: pcisig.com
Two types of PCIe slot on an Asus H81M-K motherboard
Various slots on a computer motherboard, from top to bottom:
  • PCI Express x4
  • PCI Express x16
  • PCI Express x1
  • PCI Express x16
  • Conventional PCI (32-bit, 5 V)

The PCI Express electrical interface is measured by the number of simultaneous lanes.[4] (A lane is a single send/receive line of data. The analogy is a highway with traffic in both directions.) The interface is also used in a variety of other standards — most notably the laptop expansion card interface called ExpressCard. It is also used in the storage interfaces of SATA Express, U.2 (SFF-8639) and M.2.

Format specifications are maintained and developed by the PCI-SIG (PCI Special Interest Group) — a group of more than 900 companies that also maintains the conventional PCI specifications.

Architecture

 
Example of the PCI Express topology:
white "junction boxes" represent PCI Express device downstream ports, while the gray ones represent upstream ports.[5]: 7 
 
PCI Express x1 card containing a PCI Express switch (covered by a small heat sink), which creates multiple endpoints out of one endpoint and lets multiple devices share it
 
The PCIe slots on a motherboard are often labeled with the number of PCIe lanes they have. Sometimes what may seem like a large slot may only have a few lanes. For instance, an x16 slot with only 4 PCIe lanes is quite common.[6]

Conceptually, the PCI Express bus is a high-speed serial replacement of the older PCI/PCI-X bus.[7] One of the key differences between the PCI Express bus and the older PCI is the bus topology; PCI uses a shared parallel bus architecture, in which the PCI host and all devices share a common set of address, data, and control lines. In contrast, PCI Express is based on point-to-point topology, with separate serial links connecting every device to the root complex (host). Because of its shared bus topology, access to the older PCI bus is arbitrated (in the case of multiple masters), and limited to one master at a time, in a single direction. Furthermore, the older PCI clocking scheme limits the bus clock to the slowest peripheral on the bus (regardless of the devices involved in the bus transaction). In contrast, a PCI Express bus link supports full-duplex communication between any two endpoints, with no inherent limitation on concurrent access across multiple endpoints.

In terms of bus protocol, PCI Express communication is encapsulated in packets. The work of packetizing and de-packetizing data and status-message traffic is handled by the transaction layer of the PCI Express port (described later). Radical differences in electrical signaling and bus protocol require the use of a different mechanical form factor and expansion connectors (and thus, new motherboards and new adapter boards); PCI slots and PCI Express slots are not interchangeable. At the software level, PCI Express preserves backward compatibility with PCI; legacy PCI system software can detect and configure newer PCI Express devices without explicit support for the PCI Express standard, though new PCI Express features are inaccessible.

The PCI Express link between two devices can vary in size from one to 16 lanes. In a multi-lane link, the packet data is striped across lanes, and peak data throughput scales with the overall link width. The lane count is automatically negotiated during device initialization and can be restricted by either endpoint. For example, a single-lane PCI Express (x1) card can be inserted into a multi-lane slot (x4, x8, etc.), and the initialization cycle auto-negotiates the highest mutually supported lane count (illustrated in the sketch below). The link can dynamically down-configure itself to use fewer lanes, providing failure tolerance in case bad or unreliable lanes are present. The PCI Express standard defines link widths of x1, x2, x4, x8, and x16. Up to and including PCIe 5.0, x12 and x32 links were also defined but never used.[8] This allows the PCI Express bus to serve both cost-sensitive applications where high throughput is not needed, and performance-critical applications such as 3D graphics, networking (10 Gigabit Ethernet or multiport Gigabit Ethernet), and enterprise storage (SAS or Fibre Channel). Slots and connectors are only defined for a subset of these widths, with link widths in between using the next larger physical slot size.
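
The negotiation described above can be illustrated with a short sketch (illustrative Python, not part of the specification; the function name and the set representation of capabilities are assumptions for the example):

    # Illustrative sketch of link-width negotiation (not the actual
    # link training state machine): training settles on the widest
    # width supported by both the card and the slot.
    STANDARD_WIDTHS = (16, 8, 4, 2, 1)  # widths defined by the PCIe standard

    def negotiate_width(card_widths, slot_widths):
        """Return the widest mutually supported link width; e.g. an x16
        card in a slot wired for x4 trains down to x4."""
        for width in STANDARD_WIDTHS:
            if width in card_widths and width in slot_widths:
                return width
        raise ValueError("no common width; all devices must support x1")

    # An x16 graphics card in an "x16 (x4 mode)" slot:
    print(negotiate_width({1, 2, 4, 8, 16}, {1, 2, 4}))  # -> 4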

As a point of reference, a PCI-X (133 MHz 64-bit) device and a PCI Express 1.0 device using four lanes (x4) have roughly the same peak single-direction transfer rate of 1064 MB/s. The PCI Express bus has the potential to perform better than the PCI-X bus in cases where multiple devices are transferring data simultaneously, or if communication with the PCI Express peripheral is bidirectional.

Interconnect

 
A PCI Express link between two devices consists of one or more lanes, which are dual simplex channels using two differential signaling pairs.[5]: 3 

PCI Express devices communicate via a logical connection called an interconnect[9] or link. A link is a point-to-point communication channel between two PCI Express ports allowing both of them to send and receive ordinary PCI requests (configuration, I/O or memory read/write) and interrupts (INTx, MSI or MSI-X). At the physical level, a link is composed of one or more lanes.[9] Low-speed peripherals (such as an 802.11 Wi-Fi card) use a single-lane (x1) link, while a graphics adapter typically uses a much wider and therefore faster 16-lane (x16) link.

Lane

A lane is composed of two differential signaling pairs, with one pair for receiving data and the other for transmitting. Thus, each lane is composed of four wires or signal traces. Conceptually, each lane is used as a full-duplex byte stream, transporting data packets in eight-bit "byte" format simultaneously in both directions between endpoints of a link.[10] Physical PCI Express links may contain 1, 4, 8 or 16 lanes.[11][5]: 4, 5 [9] Lane counts are written with an "x" prefix (for example, "x8" represents an eight-lane card or slot), with x16 being the largest size in common use.[12] Lane sizes are also referred to via the terms "width" or "by"; e.g., an eight-lane slot could be referred to as a "by 8" or as "8 lanes wide."

For mechanical card sizes, see below.

Serial bus

The bonded serial bus architecture was chosen over the traditional parallel bus because of the inherent limitations of the latter, including half-duplex operation, excess signal count, and inherently lower bandwidth due to timing skew. Timing skew results from separate electrical signals within a parallel interface traveling through conductors of different lengths, on potentially different printed circuit board (PCB) layers, and at possibly different signal velocities. Despite being transmitted simultaneously as a single word, signals on a parallel interface have different travel duration and arrive at their destinations at different times. When the interface clock period is shorter than the largest time difference between signal arrivals, recovery of the transmitted word is no longer possible. Since timing skew over a parallel bus can amount to a few nanoseconds, the resulting bandwidth limitation is in the range of hundreds of megahertz.
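
To make the closing estimate concrete, a back-of-the-envelope calculation (the skew figure below is an assumed example, not a specified value) shows how nanosecond-scale skew caps a parallel bus in the hundreds-of-megahertz range:

    # If the clock period must exceed the worst-case skew between
    # parallel signal arrivals, the skew bounds the usable clock rate.
    skew_ns = 2.5                        # assumed worst-case skew, a few ns
    max_clock_hz = 1 / (skew_ns * 1e-9)
    print(f"max clock ~ {max_clock_hz / 1e6:.0f} MHz")  # ~400 MHz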

 
Highly simplified topologies of the Legacy PCI Shared (Parallel) Interface and the PCIe Serial Point-to-Point Interface[13]

A serial interface does not exhibit timing skew because there is only one differential signal in each direction within each lane, and there is no external clock signal since clocking information is embedded within the serial signal itself. As such, typical bandwidth limitations on serial signals are in the multi-gigahertz range. PCI Express is one example of the general trend toward replacing parallel buses with serial interconnects; other examples include Serial ATA (SATA), USB, Serial Attached SCSI (SAS), FireWire (IEEE 1394), and RapidIO. In digital video, examples in common use are DVI, HDMI, and DisplayPort.

Multichannel serial design increases flexibility with its ability to allocate fewer lanes for slower devices.

Form factors

PCI Express (standard)

 
Intel P3608 NVMe flash SSD, PCI-E add-in card

A PCI Express card fits into a slot of its physical size or larger (with x16 as the largest used), but may not fit into a smaller PCI Express slot; for example, a x16 card may not fit into a x4 or x8 slot. Some slots use open-ended sockets to permit physically longer cards and negotiate the best available electrical and logical connection.

The number of lanes actually connected to a slot may also be fewer than the number supported by the physical slot size. An example is a x16 slot that runs at x4, which accepts any x1, x2, x4, x8 or x16 card, but provides only four lanes. Its specification may read as "x16 (x4 mode)", while "mechanical @ electrical" notation (e.g. "x16 @ x4") is also common.[citation needed] The advantage is that such slots can accommodate a larger range of PCI Express cards without requiring motherboard hardware to support the full transfer rate. Standard mechanical sizes are x1, x4, x8, and x16. Cards with a differing number of lanes need to use the next larger mechanical size (i.e. a x2 card uses the x4 size, or a x12 card uses the x16 size).

The cards themselves are designed and manufactured in various sizes. For example, solid-state drives (SSDs) that come in the form of PCI Express cards often use HHHL (half height, half length) and FHHL (full height, half length) to describe the physical dimensions of the card.[14][15]

PCI card type      Dimensions, height × length × width, maximum
                   (mm)                       (in)
Full-Length        111.15 × 312.00 × 20.32    4.376 × 12.283 × 0.8
Half-Length        111.15 × 167.65 × 20.32    4.376 × 6.600 × 0.8
Low-Profile/Slim   68.90 × 167.65 × 20.32     2.731 × 6.600 × 0.8

Non-standard video card form factors

Modern (since c. 2012[16]) gaming video cards usually exceed the height as well as the thickness specified in the PCI Express standard, due to the need for more capable and quieter cooling fans, as gaming video cards often emit hundreds of watts of heat.[17] Modern computer cases are often wider to accommodate these taller cards, but not always; and because full-length cards (312 mm) are uncommon, modern cases sometimes cannot fit them. The thickness of these cards also typically occupies the space of two PCIe slots. Even the methodology of measuring the cards varies between vendors, with some including the metal bracket in the dimensions and others not.

For instance, a 2020 Sapphire card measures 135 mm in height (excluding the metal bracket), which exceeds the PCIe standard height by 28 mm.[18] Another card by XFX measures 55 mm thick (i.e. 2.7 PCI slots at 20.32 mm), taking up 3 PCIe slots.[19] The Asus GeForce RTX 3080 10 GB STRIX GAMING OC video card is a two-slot card that has dimensions of 318.5 mm × 140.1 mm × 57.8 mm, exceeding PCI Express' maximum length, height, and thickness respectively.[20]

Pinout

The following table identifies the conductors on each side of the edge connector on a PCI Express card. The solder side of the printed circuit board (PCB) is the A-side, and the component side is the B-side.[21] PRSNT1# and PRSNT2# pins must be slightly shorter than the rest, to ensure that a hot-plugged card is fully inserted. The WAKE# pin uses full voltage to wake the computer, but must be pulled high from the standby power to indicate that the card is wake capable.[22]

PCI Express connector pinout (x1, x4, x8 and x16 variants)
Pin Side B Side A Description Pin Side B Side A Description
01 +12 V PRSNT1# Must connect to farthest PRSNT2# pin 50 HSOp(8) Reserved Lane 8 transmit data, + and −
02 +12 V +12 V Main power pins 51 HSOn(8) Ground
03 +12 V +12 V 52 Ground HSIp(8) Lane 8 receive data, + and −
04 Ground Ground 53 Ground HSIn(8)
05 SMCLK TCK SMBus and JTAG port pins 54 HSOp(9) Ground Lane 9 transmit data, + and −
06 SMDAT TDI 55 HSOn(9) Ground
07 Ground TDO 56 Ground HSIp(9) Lane 9 receive data, + and −
08 +3.3 V TMS 57 Ground HSIn(9)
09 TRST# +3.3 V 58 HSOp(10) Ground Lane 10 transmit data, + and −
10 +3.3 V aux +3.3 V Aux power & Standby power 59 HSOn(10) Ground
11 WAKE# PERST# Link reactivation; fundamental reset [23] 60 Ground HSIp(10) Lane 10 receive data, + and −
Key notch 61 Ground HSIn(10)
12 CLKREQ#[24] Ground Clock Request Signal 62 HSOp(11) Ground Lane 11 transmit data, + and −
13 Ground REFCLK+ Reference clock differential pair 63 HSOn(11) Ground
14 HSOp(0) REFCLK− Lane 0 transmit data, + and − 64 Ground HSIp(11) Lane 11 receive data, + and −
15 HSOn(0) Ground 65 Ground HSIn(11)
16 Ground HSIp(0) Lane 0 receive data, + and − 66 HSOp(12) Ground Lane 12 transmit data, + and −
17 PRSNT2# HSIn(0) 67 HSOn(12) Ground
18 Ground Ground 68 Ground HSIp(12) Lane 12 receive data, + and −
PCI Express x1 cards end at pin 18 69 Ground HSIn(12)
19 HSOp(1) Reserved Lane 1 transmit data, + and − 70 HSOp(13) Ground Lane 13 transmit data, + and −
20 HSOn(1) Ground 71 HSOn(13) Ground
21 Ground HSIp(1) Lane 1 receive data, + and − 72 Ground HSIp(13) Lane 13 receive data, + and −
22 Ground HSIn(1) 73 Ground HSIn(13)
23 HSOp(2) Ground Lane 2 transmit data, + and − 74 HSOp(14) Ground Lane 14 transmit data, + and −
24 HSOn(2) Ground 75 HSOn(14) Ground
25 Ground HSIp(2) Lane 2 receive data, + and − 76 Ground HSIp(14) Lane 14 receive data, + and −
26 Ground HSIn(2) 77 Ground HSIn(14)
27 HSOp(3) Ground Lane 3 transmit data, + and − 78 HSOp(15) Ground Lane 15 transmit data, + and −
28 HSOn(3) Ground 79 HSOn(15) Ground
29 Ground HSIp(3) Lane 3 receive data, + and − 80 Ground HSIp(15) Lane 15 receive data, + and −
30 PWRBRK#[25] HSIn(3) 81 PRSNT2# HSIn(15)
31 PRSNT2# Ground 82 Reserved Ground
32 Ground Reserved
PCI Express x4 cards end at pin 32
33 HSOp(4) Reserved Lane 4 transmit data, + and −
34 HSOn(4) Ground
35 Ground HSIp(4) Lane 4 receive data, + and −
36 Ground HSIn(4)
37 HSOp(5) Ground Lane 5 transmit data, + and −
38 HSOn(5) Ground
39 Ground HSIp(5) Lane 5 receive data, + and −
40 Ground HSIn(5)
41 HSOp(6) Ground Lane 6 transmit data, + and −
42 HSOn(6) Ground
43 Ground HSIp(6) Lane 6 receive data, + and −
44 Ground HSIn(6)
45 HSOp(7) Ground Lane 7 transmit data, + and −
46 HSOn(7) Ground
47 Ground HSIp(7) Lane 7 receive data, + and −
48 PRSNT2# HSIn(7)
49 Ground Ground
PCI Express x8 cards end at pin 49

Legend
Ground pin        Zero volt reference
Power pin         Supplies power to the PCIe card
Card-to-host pin  Signal from the card to the motherboard
Host-to-card pin  Signal from the motherboard to the card
Open drain        May be pulled low or sensed by multiple cards
Sense pin         Tied together on card
Reserved          Not presently used, do not connect

Power

 
8-pin (left) and 6-pin (right) power connectors used on PCI Express cards

All PCI Express cards may consume up to 3 A at +3.3 V (9.9 W). The amount of +12 V and total power they may consume depends on the form factor and the role of the card:[26]: 35–36 [27][28]

  • x1 cards are limited to 0.5 A at +12 V (6 W) and 10 W combined.
  • x4 and wider cards are limited to 2.1 A at +12 V (25 W) and 25 W combined.
  • A full-sized x1 card may draw up to the 25 W limit after initialization and software configuration as a high-power device.
  • A full-sized x16 graphics card may draw up to 5.5 A at +12 V (66 W) and 75 W combined after initialization and software configuration as a high-power device.[22]: 38–39 
 
The main 12 V power supply for the PCIe slot is on pins B2 and B3 (side B) and pins A2 and A3 (side A). Standby 3.3 V power is on pins B10 and A10. PCIe x1 cards can receive up to 25 W and x16 graphics cards can receive up to 75 W, combined.[29]

Optional connectors add 75 W (6-pin) or 150 W (8-pin) of +12 V power for up to 300 W total (2 × 75 W + 1 × 150 W).

  • Sense0 pin is connected to ground by the cable or power supply, or floats on the board if the cable is not connected.
  • Sense1 pin is connected to ground by the cable or power supply, or floats on the board if the cable is not connected.

Some cards use two 8-pin connectors, but this had not been standardized as of 2018; such cards must therefore not carry the official PCI Express logo. This configuration allows 375 W total (1 × 75 W + 2 × 150 W) and will likely be standardized by PCI-SIG with the PCI Express 4.0 standard.[needs update] The 8-pin PCI Express connector could be confused with the EPS12V connector, which is mainly used for powering SMP and multi-core systems. The power connectors are variants of the Molex Mini-Fit Jr. series connectors.[30]
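
The power budgets above reduce to simple arithmetic; the following sketch (illustrative Python; the function is an assumption for the example) reproduces the standard 300 W and non-standard 375 W configurations:

    # Power budget for a high-power x16 card, per the limits above:
    # 75 W from the slot, plus 75 W per 6-pin and 150 W per 8-pin connector.
    def card_power_budget(six_pin=0, eight_pin=0, slot_watts=75):
        return slot_watts + 75 * six_pin + 150 * eight_pin

    print(card_power_budget(six_pin=1, eight_pin=1))  # 300 W (2 × 75 W + 1 × 150 W)
    print(card_power_budget(eight_pin=2))             # 375 W (1 × 75 W + 2 × 150 W), non-standard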

Molex Mini-Fit Jr. part numbers[30]
Pins    Female/receptacle on PS cable    Male/right-angle header on PCB
6-pin   45559-0002                       45558-0003
8-pin   45587-0004                       45586-0005, 45586-0006
6-pin power connector (75 W)[31]
Pin  Description
1    +12 V
2    Not connected (usually +12 V as well)
3    +12 V
4    Ground
5    Sense
6    Ground

8-pin power connector (150 W)[32][33][34]
Pin  Description
1    +12 V
2    +12 V
3    +12 V
4    Sense1 (8-pin connected[A])
5    Ground
6    Sense0 (6-pin or 8-pin connected)
7    Ground
8    Ground
  1. ^ When a 6-pin connector is plugged into an 8-pin receptacle the card is notified by a missing Sense1 that it may only use up to 75 W.

PCI Express Mini Card

 
A WLAN PCI Express Mini Card and its connector
 
MiniPCI and MiniPCI Express cards in comparison

PCI Express Mini Card (also known as Mini PCI Express, Mini PCIe, Mini PCI-E, mPCIe, and PEM), based on PCI Express, is a replacement for the Mini PCI form factor. It was developed by the PCI-SIG. The host device supports both PCI Express and USB 2.0 connectivity, and each card may use either standard. Most laptop computers built after 2005 use PCI Express for expansion cards; however, as of 2015, many vendors were moving toward using the newer M.2 form factor for this purpose.

Due to different dimensions, PCI Express Mini Cards are not physically compatible with standard full-size PCI Express slots; however, passive adapters exist that let them be used in full-size slots.[35]

Physical dimensions

Dimensions of PCI Express Mini Cards are 30 mm × 50.95 mm (width × length) for a Full Mini Card. There is a 52-pin edge connector, consisting of two staggered rows on a 0.8 mm pitch. Each row has eight contacts, a gap equivalent to four contacts, then a further 18 contacts. Boards have a thickness of 1.0 mm, excluding the components. A "Half Mini Card" (sometimes abbreviated as HMC) is also specified, with approximately half the physical length, at 26.8 mm.

Electrical interface

PCI Express Mini Card edge connectors provide multiple connections and buses:

  • PCI Express x1 (with SMBus)
  • USB 2.0
  • Wires to diagnostic LEDs for wireless network (i.e., Wi-Fi) status on the computer's chassis
  • SIM card for GSM and WCDMA applications (UIM signals on spec.)
  • Future extension for another PCIe lane
  • 1.5 V and 3.3 V power

Mini-SATA (mSATA) variant

 
An Intel mSATA SSD

Despite sharing the Mini PCI Express form factor, an mSATA slot is not necessarily electrically compatible with Mini PCI Express. For this reason, only certain notebooks are compatible with mSATA drives. Most compatible systems are based on Intel's Sandy Bridge processor architecture, using the Huron River platform. Notebooks such as Lenovo's ThinkPad T, W and X series, released in March–April 2011, have support for an mSATA SSD card in their WWAN card slot. The ThinkPad Edge E220s/E420s and the Lenovo IdeaPad Y460/Y560/Y570/Y580 also support mSATA.[36] In contrast, the L-series, among others, can only support M.2 cards using the PCIe standard in the WWAN slot.

Some notebooks (notably the Asus Eee PC, the Apple MacBook Air, and the Dell mini9 and mini10) use a variant of the PCI Express Mini Card as an SSD. This variant uses the reserved and several non-reserved pins to implement SATA and IDE interface passthrough, keeping only USB, ground lines, and sometimes the core PCIe x1 bus intact.[37] This makes the "miniPCIe" flash and solid-state drives sold for netbooks largely incompatible with true PCI Express Mini implementations.

Also, the typical Asus miniPCIe SSD is 71 mm long, causing the Dell 51 mm model to often be (incorrectly) referred to as half length. A true 51 mm Mini PCIe SSD was announced in 2009, with two stacked PCB layers that allow for higher storage capacity. The announced design preserves the PCIe interface, making it compatible with the standard mini PCIe slot. No working product has yet been developed.

Intel has numerous desktop boards with the PCIe x1 Mini-Card slot that typically do not support mSATA SSD. A list of desktop boards that natively support mSATA in the PCIe x1 Mini-Card slot (typically multiplexed with a SATA port) is provided on the Intel Support site.[38]

PCI Express M.2

M.2 replaces the mSATA standard and Mini PCIe.[39] Computer bus interfaces provided through the M.2 connector are PCI Express 3.0 (up to four lanes), Serial ATA 3.0, and USB 3.0 (a single logical port for each of the latter two). It is up to the manufacturer of the M.2 host or device to choose which interfaces to support, depending on the desired level of host support and device type.

PCI Express External Cabling

PCI Express External Cabling (also known as External PCI Express, Cabled PCI Express, or ePCIe) specifications were released by the PCI-SIG in February 2007.[40][41]

Standard cables and connectors have been defined for x1, x4, x8, and x16 link widths, with a transfer rate of 250 MB/s per lane. The PCI-SIG also expects the standard to evolve to reach 500 MB/s, as in PCI Express 2.0. An example of the uses of Cabled PCI Express is a metal enclosure containing a number of PCIe slots and PCIe-to-ePCIe adapter circuitry; such a device is made possible by the ePCIe specification.

PCI Express OCuLink

OCuLink (standing for "optical-copper link", since Cu is the chemical symbol for copper) is an extension for the "cable version of PCI Express", acting as a competitor to version 3 of the Thunderbolt interface. Version 1.0 of OCuLink, released in October 2015, supports up to PCIe 3.0 x4 lanes (8 GT/s (gigatransfers per second), 3.9 GB/s) over copper cabling; a fiber optic version may appear in the future.

In its latest version, OCuLink-2, it supports up to 16 GB/s (PCIe 4.0 x8),[42] while the maximum bandwidth of a full-speed Thunderbolt 4 cable is 5 GB/s. Some suppliers may design their connector products to support next-generation PCI Express 5.0 running at 32 GT/s per lane, for future-proofing and to minimize development costs over the next few years.[42] Initially, PCI-SIG expected OCuLink to be used in laptops for connecting powerful external GPU boxes; that use case turned out to be rare. Instead, OCuLink became popular for PCIe interconnections in servers.[43]

Derivative forms

Numerous other form factors use, or are able to use, PCIe. These include:

  • Low-height card
  • ExpressCard: Successor to the PC Card form factor (with x1 PCIe and USB 2.0; hot-pluggable)
  • PCI Express ExpressModule: A hot-pluggable modular form factor defined for servers and workstations
  • XQD card: A PCI Express-based flash card standard by the CompactFlash Association with x2 PCIe
  • CFexpress card: A PCI Express-based flash card by the CompactFlash Association in three form factors supporting 1 to 4 PCIe lanes
  • SD card: The SD Express bus, introduced in version 7.0 of the SD specification, uses an x1 PCIe link
  • XMC: Similar to the CMC/PMC form factor (VITA 42.3)
  • AdvancedTCA: A complement to CompactPCI for larger applications; supports serial based backplane topologies
  • AMC: A complement to the AdvancedTCA specification; supports processor and I/O modules on ATCA boards (x1, x2, x4 or x8 PCIe).
  • FeaturePak: A tiny expansion card format (43 mm × 65 mm) for embedded and small-form-factor applications, which implements two x1 PCIe links on a high-density connector along with USB, I2C, and up to 100 points of I/O
  • Universal IO: A variant from Super Micro Computer Inc designed for use in low-profile rack-mounted chassis.[44] It has the connector bracket reversed so it cannot fit in a normal PCI Express socket, but it is pin-compatible and may be inserted if the bracket is removed.
  • M.2 (formerly known as NGFF)
  • M-PCIe brings PCIe 3.0 to mobile devices (such as tablets and smartphones), over the M-PHY physical layer.[45][46]
  • U.2 (formerly known as SFF-8639)

The PCIe slot connector can also carry protocols other than PCIe. Some 9xx series Intel chipsets support Serial Digital Video Out, a proprietary technology that uses a slot to transmit video signals from the host CPU's integrated graphics instead of PCIe, using a supported add-in card.

The PCIe transaction-layer protocol can also be used over some other interconnects, which are not electrically PCIe:

  • Thunderbolt: A royalty-free interconnect standard by Intel that combines DisplayPort and PCIe protocols in a form factor compatible with Mini DisplayPort. Thunderbolt 3.0 also combines USB 3.1 and uses the USB-C form factor as opposed to Mini DisplayPort.
  • USB4

History and revisions

While in early development, PCIe was initially referred to as HSI (for High Speed Interconnect), and underwent a name change to 3GIO (for 3rd Generation I/O) before finally settling on its PCI-SIG name PCI Express. A technical working group named the Arapaho Work Group (AWG) drew up the standard. For initial drafts, the AWG consisted only of Intel engineers; subsequently, the AWG expanded to include industry partners.

Since then, PCIe has undergone several major and minor revisions, improving performance and adding features.

PCI Express link performance[47][48]
Version  Introduced      Line code                   Transfer rate per lane[i][ii]  Throughput[i][iii] (x1 / x2 / x4 / x8 / x16)
1.0      2003            NRZ, 8b/10b                 2.5 GT/s                       0.250 / 0.500 / 1.000 / 2.000 / 4.000 GB/s
2.0      2007            NRZ, 8b/10b                 5.0 GT/s                       0.500 / 1.000 / 2.000 / 4.000 / 8.000 GB/s
3.0      2010            NRZ, 128b/130b              8.0 GT/s                       0.985 / 1.969 / 3.938 / 7.877 / 15.754 GB/s
4.0      2017            NRZ, 128b/130b              16.0 GT/s                      1.969 / 3.938 / 7.877 / 15.754 / 31.508 GB/s
5.0      2019            NRZ, 128b/130b              32.0 GT/s                      3.938 / 7.877 / 15.754 / 31.508 / 63.015 GB/s
6.0      2022            PAM-4, FEC, 242B/256B FLIT  64.0 GT/s (32.0 GBd)           7.563 / 15.125 / 30.250 / 60.500 / 121.000 GB/s
7.0      2025 (planned)  PAM-4, FEC, 242B/256B FLIT  128.0 GT/s (64.0 GBd)          15.125 / 30.250 / 60.500 / 121.000 / 242.000 GB/s
Notes
  1. ^ a b In each direction (each lane is a dual simplex channel).
  2. ^ Transfer rate refers to the encoded serial bit rate; 2.5 GT/s means 2.5 Gbit/s serial data rate.
  3. ^ Throughput indicates the unencoded bandwidth (without 8b/10b, 128b/130b, or 242B/256B encoding overhead). The PCIe 1.0 transfer rate of 2.5 GT/s per lane means a 2.5 Gbit/s serial bit rate corresponding to a throughput of 2.0 Gbit/s or 250 MB/s prior to 8b/10b encoding.

PCI Express 1.0a

In 2003, PCI-SIG introduced PCIe 1.0a, with a per-lane data rate of 250 MB/s and a transfer rate of 2.5 gigatransfers per second (GT/s).

Transfer rate is expressed in transfers per second instead of bits per second because the number of transfers includes the overhead bits, which do not provide additional throughput;[49] PCIe 1.x uses an 8b/10b encoding scheme, resulting in a 20% (= 2/10) overhead on the raw channel bandwidth.[50] So, in PCIe terminology, transfer rate refers to the encoded bit rate: 2.5 GT/s is 2.5 Gbit/s on the encoded serial link. This corresponds to 2.0 Gbit/s of pre-coded data, or 250 MB/s, which is referred to as throughput in PCIe.
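
The relationship between transfer rate, line code, and throughput can be reproduced with a few lines of arithmetic (illustrative Python using the figures quoted in this article):

    # Per-lane throughput = transfer rate × encoding efficiency.
    # PCIe 1.x/2.0 use 8b/10b; PCIe 3.0 through 5.0 use 128b/130b.
    GENERATIONS = {  # version: (GT/s per lane, payload bits, coded bits)
        "1.0": (2.5, 8, 10),
        "2.0": (5.0, 8, 10),
        "3.0": (8.0, 128, 130),
        "4.0": (16.0, 128, 130),
        "5.0": (32.0, 128, 130),
    }

    for version, (gt_s, data_bits, coded_bits) in GENERATIONS.items():
        gbit_s = gt_s * data_bits / coded_bits  # unencoded Gbit/s per lane
        mb_s = gbit_s * 1000 / 8                # MB/s per lane
        print(f"PCIe {version}: {mb_s:6.1f} MB/s per lane, x16 = {mb_s * 16 / 1000:.2f} GB/s")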

PCI Express 1.1

In 2005, PCI-SIG[51] introduced PCIe 1.1. This updated specification includes clarifications and several improvements, but is fully compatible with PCI Express 1.0a. No changes were made to the data rate.

PCI Express 2.0

 
A PCI Express 2.0 expansion card that provides USB 3.0 connectivity.[b]

PCI-SIG announced the availability of the PCI Express Base 2.0 specification on 15 January 2007.[52] The PCIe 2.0 standard doubles the transfer rate compared with PCIe 1.0 to 5 GT/s and the per-lane throughput rises from 250 MB/s to 500 MB/s. Consequently, a 16-lane PCIe connector (x16) can support an aggregate throughput of up to 8 GB/s.

PCIe 2.0 motherboard slots are fully backward compatible with PCIe v1.x cards. PCIe 2.0 cards are also generally backward compatible with PCIe 1.x motherboards, using the available bandwidth of PCI Express 1.1. Overall, graphics cards or motherboards designed for v2.0 work with counterparts designed for v1.1 or v1.0a.

The PCI-SIG also said that PCIe 2.0 features improvements to the point-to-point data transfer protocol and its software architecture.[53]

Intel's first PCIe 2.0 capable chipset was the X38, and boards began to ship from various vendors (Abit, Asus, Gigabyte) as of 21 October 2007.[54] AMD started supporting PCIe 2.0 with its AMD 700 chipset series and Nvidia started with the MCP72.[55] All of Intel's prior chipsets, including the Intel P35 chipset, supported PCIe 1.1 or 1.0a.[56]

Like 1.x, PCIe 2.0 uses an 8b/10b encoding scheme, therefore delivering, per lane, an effective maximum transfer rate of 4 Gbit/s from its 5 GT/s raw data rate.

PCI Express 2.1

PCI Express 2.1 (with its specification dated 4 March 2009) supports a large proportion of the management, support, and troubleshooting systems planned for full implementation in PCI Express 3.0; however, the speed is the same as PCI Express 2.0. The increase in power supplied via the slot breaks backward compatibility between PCI Express 2.1 cards and some older motherboards with 1.0/1.0a slots, but most motherboards with PCI Express 1.1 connectors received BIOS updates from their manufacturers to support backward compatibility with PCIe 2.1 cards.

PCI Express 3.0

PCI Express 3.0 Base specification revision 3.0 was made available in November 2010, after multiple delays. In August 2007, PCI-SIG announced that PCI Express 3.0 would carry a bit rate of 8 gigatransfers per second (GT/s), and that it would be backward compatible with existing PCI Express implementations. At that time, it was also announced that the final specification for PCI Express 3.0 would be delayed until Q2 2010.[57] New features for the PCI Express 3.0 specification included a number of optimizations for enhanced signaling and data integrity, including transmitter and receiver equalization, PLL improvements, clock data recovery, and channel enhancements of currently supported topologies.[58]

Following a six-month technical analysis of the feasibility of scaling the PCI Express interconnect bandwidth, PCI-SIG's analysis found that 8 gigatransfers per second could be manufactured in mainstream silicon process technology, and deployed with existing low-cost materials and infrastructure, while maintaining full compatibility (with negligible impact) with the PCI Express protocol stack.

PCI Express 3.0 upgraded the encoding scheme to 128b/130b from the previous 8b/10b encoding, reducing the bandwidth overhead from 20% of PCI Express 2.0 to approximately 1.54% (= 2/130). PCI Express 3.0's 8 GT/s bit rate effectively delivers 985 MB/s per lane, nearly doubling the lane bandwidth relative to PCI Express 2.0.[48]

On 18 November 2010, the PCI Special Interest Group officially published the finalized PCI Express 3.0 specification to its members to build devices based on this new version of PCI Express.[59]

PCI Express 3.1

In September 2013, the PCI Express 3.1 specification was announced for release in late 2013 or early 2014, consolidating various improvements to the published PCI Express 3.0 specification in three areas: power management, performance, and functionality.[46][60] It was released in November 2014.[61]

PCI Express 4.0

On 29 November 2011, PCI-SIG preliminarily announced PCI Express 4.0,[62] providing a 16 GT/s bit rate that doubles the bandwidth provided by PCI Express 3.0 to 31.5 GB/s in each direction for a 16-lane configuration, while maintaining backward and forward compatibility in both software support and used mechanical interface.[63] PCI Express 4.0 specs also bring OCuLink-2, an alternative to Thunderbolt. OCuLink version 2 has up to 16 GT/s (16 GB/s total for x8 lanes),[42] while the maximum bandwidth of a Thunderbolt 3 link is 5 GB/s.

In June 2016 Cadence, PLDA and Synopsys demoed PCIe 4.0 physical-layer, controller, switch and other IP blocks at the PCI SIG’s annual developer’s conference.[64]

Mellanox Technologies announced the first 100 Gbit/s network adapter with PCIe 4.0 on 15 June 2016,[65] and the first 200 Gbit/s network adapter with PCIe 4.0 on 10 November 2016.[66]

In August 2016, Synopsys presented a test setup with FPGA clocking a lane to PCIe 4.0 speeds at the Intel Developer Forum. Their IP has been licensed to several firms planning to present their chips and products at the end of 2016.[67]

At the IEEE Hot Chips Symposium in August 2016, IBM announced the first CPU with PCIe 4.0 support, POWER9.[68][69]

PCI-SIG officially announced the release of the final PCI Express 4.0 specification on 8 June 2017.[70] The specification includes improvements in flexibility, scalability, and power efficiency.

On 5 December 2017 IBM announced the first system with PCIe 4.0 slots, Power AC922.[71][72]

NETINT Technologies introduced the first NVMe SSD based on PCIe 4.0 on 17 July 2018, ahead of Flash Memory Summit 2018.[73]

AMD announced on 9 January 2019 its upcoming Zen 2-based processors and X570 chipset would support PCIe 4.0.[74] AMD had hoped to enable partial support for older chipsets, but instability caused by motherboard traces not conforming to PCIe 4.0 specifications made that impossible.[75][76]

Intel released its first mobile CPUs with PCI Express 4.0 support in mid-2020, as part of the Tiger Lake microarchitecture.[77]

PCI Express 5.0

In June 2017, PCI-SIG announced the PCI Express 5.0 preliminary specification.[70] Bandwidth was expected to increase to 32 GT/s, yielding 63 GB/s in each direction in a 16-lane configuration. The draft spec was expected to be standardized in 2019.[citation needed] Initially, 25.0 GT/s was also considered for technical feasibility.

On 7 June 2017 at PCI-SIG DevCon, Synopsys recorded the first demonstration of PCI Express 5.0 at 32 GT/s.[78]

On 31 May 2018, PLDA announced the availability of its XpressRICH5 PCIe 5.0 Controller IP, based on draft 0.7 of the PCIe 5.0 specification.[79][80]

On 10 December 2018, the PCI SIG released version 0.9 of the PCIe 5.0 specification to its members,[81] and on 17 January 2019, PCI SIG announced that version 0.9 had been ratified, with version 1.0 targeted for release in the first quarter of 2019.[82]

On 29 May 2019, PCI-SIG officially announced the release of the final PCI Express 5.0 specification.[83]

On 20 November 2019, Jiangsu Huacun presented the first PCIe 5.0 controller, the HC9001, manufactured in a 12 nm process.[84] Production started in 2020.

On 17 August 2020, IBM announced the Power10 processor with PCIe 5.0 and up to 32 lanes per single-chip module (SCM) and up to 64 lanes per double-chip module (DCM).[85]

On 9 September 2021, IBM announced the Power E1080 Enterprise server with planned availability date 17 September.[86] It can have up to 16 Power10 SCMs with maximum of 32 slots per system which can act as PCIe 5.0 x8 or PCIe 4.0 x16.[87] Alternatively they can be used as PCIe 5.0 x16 slots for optional optical CXP converter adapters connecting to external PCIe expansion drawers.

On 27 October 2021, Intel announced the 12th Gen Intel Core CPU family, the world's first consumer x86-64 processors with PCIe 5.0 (up to 16 lanes) connectivity.[88]

On 22 March 2022, Nvidia announced Nvidia Hopper GH100 GPU, the world's first PCIe 5.0 GPU.[89]

On 23 May 2022, AMD announced its Zen 4 architecture with support for up to 24 lanes of PCIe 5.0 connectivity on consumer platforms and 128 lanes on server platforms.[90][91]

PCI Express 6.0

On 18 June 2019, PCI-SIG announced the development of the PCI Express 6.0 specification. Bandwidth is expected to increase to 64 GT/s, yielding 128 GB/s in each direction in a 16-lane configuration, with a target release date of 2021.[92] The new standard uses 4-level pulse-amplitude modulation (PAM-4) with low-latency forward error correction (FEC) in place of non-return-to-zero (NRZ) modulation.[93] Unlike previous PCI Express versions, forward error correction is used to increase data integrity, and PAM-4 is used as the line code so that two bits are transferred per symbol. With a 64 GT/s data transfer rate (raw bit rate), up to 121 GB/s in each direction is possible in the x16 configuration.[92]

On 24 February 2020, the PCI Express 6.0 revision 0.5 specification (a "first draft" with all architectural aspects and requirements defined) was released.[94]

On 5 November 2020, the PCI Express 6.0 revision 0.7 specification (a "complete draft" with electrical specifications validated via test chips) was released.[95]

On 6 October 2021, the PCI Express 6.0 revision 0.9 specification (a "final draft") was released.[96]

On 11 January 2022, PCI-SIG officially announced the release of the final PCI Express 6.0 specification.[97]

PAM-4 coding results in a vastly higher bit error rate (BER) of 10⁻⁶ (vs. 10⁻¹² previously), so in place of 128b/130b encoding, a 3-way interleaved forward error correction (FEC) is used in addition to a cyclic redundancy check (CRC). A fixed 256-byte Flow Control Unit (FLIT) block carries 242 bytes of data, which includes variable-sized transaction layer packets (TLP) and data link layer payload (DLLP); the remaining 14 bytes are reserved for an 8-byte CRC and a 6-byte FEC.[98][99] Gray code is used in PAM-4/FLIT mode to reduce the error rate; the interface does not switch back to NRZ and 128b/130b encoding even when retraining to lower data rates.[100][101]
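
The fixed FLIT layout described above can be stated directly in code (illustrative Python; the constant names are assumptions, while the byte counts come from the text):

    # PCIe 6.0 FLIT: a fixed 256-byte unit, per the breakdown above.
    FLIT_SIZE = 256
    FLIT_DATA = 242   # TLP and DLLP payload bytes
    FLIT_CRC = 8
    FLIT_FEC = 6
    assert FLIT_DATA + FLIT_CRC + FLIT_FEC == FLIT_SIZE

    # FLIT framing efficiency, which replaces per-packet 128b/130b overhead:
    print(f"FLIT payload fraction: {FLIT_DATA / FLIT_SIZE:.1%}")  # 94.5%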

PCI Express 7.0

On 21 June 2022, PCI-SIG announced the development of the PCI Express 7.0 specification.[102] It will deliver a 128 GT/s raw bit rate and up to 242 GB/s per direction in the x16 configuration, using the same PAM-4 signaling as version 6.0. The doubling of the data rate is to be achieved by fine-tuning channel parameters to decrease signal losses and improve power efficiency. The specification is expected to be finalized in 2025.

Extensions and future directions

Some vendors offer PCIe over fiber products,[103][104][105] with active optical cables (AOC) for PCIe switching at increased distance in PCIe expansion drawers,[106][87] or in specific cases where transparent PCIe bridging is preferable to using a more mainstream standard (such as InfiniBand or Ethernet) that may require additional software to support it.

Thunderbolt was co-developed by Intel and Apple as a general-purpose high speed interface combining a logical PCIe link with DisplayPort and was originally intended as an all-fiber interface, but due to early difficulties in creating a consumer-friendly fiber interconnect, nearly all implementations are copper systems. A notable exception, the Sony VAIO Z VPC-Z2, uses a nonstandard USB port with an optical component to connect to an outboard PCIe display adapter. Apple has been the primary driver of Thunderbolt adoption through 2011, though several other vendors[107] have announced new products and systems featuring Thunderbolt. Thunderbolt 3 forms the basis of the USB4 standard.

Mobile PCIe specification (abbreviated to M-PCIe) allows PCI Express architecture to operate over the MIPI Alliance's M-PHY physical layer technology. Building on top of already existing widespread adoption of M-PHY and its low-power design, Mobile PCIe lets mobile devices use PCI Express.[108]

Draft process

There are five primary releases or checkpoints in a PCI-SIG specification:[109]

  • Draft 0.3 (Concept): this release may have few details, but outlines the general approach and goals.
  • Draft 0.5 (First draft): this release has a complete set of architectural requirements and must fully address the goals set out in the 0.3 draft.
  • Draft 0.7 (Complete draft): this release must have a complete set of functional requirements and methods defined, and no new functionality may be added to the specification after this release. Before the release of this draft, electrical specifications must have been validated via test silicon.
  • Draft 0.9 (Final draft): this release allows PCI-SIG member companies to perform an internal review for intellectual property, and no functional changes are permitted after this draft.
  • 1.0 (Final release): this is the final and definitive specification, and any changes or enhancements are through Errata documentation and Engineering Change Notices (ECNs) respectively.

Historically, the earliest adopters of a new PCIe specification generally begin designing with the Draft 0.5 as they can confidently build up their application logic around the new bandwidth definition and often even start developing for any new protocol features. At the Draft 0.5 stage, however, there is still a strong likelihood of changes in the actual PCIe protocol layer implementation, so designers responsible for developing these blocks internally may be more hesitant to begin work than those using interface IP from external sources.

Hardware protocol summary

The PCIe link is built around dedicated unidirectional pairs of serial (1-bit), point-to-point connections known as lanes. This is in sharp contrast to the earlier PCI connection, which is a bus-based system where all the devices share the same bidirectional, 32-bit or 64-bit parallel bus.

PCI Express is a layered protocol, consisting of a transaction layer, a data link layer, and a physical layer. The Data Link Layer is subdivided to include a media access control (MAC) sublayer. The Physical Layer is subdivided into logical and electrical sublayers. The Physical logical-sublayer contains a physical coding sublayer (PCS). The terms are borrowed from the IEEE 802 networking protocol model.

Physical layer

Connector pins and lengths
Lanes  Pins, total     Pins, variable  Length, total  Length, variable
x1     2×18 = 36[110]  2×7 = 14        25 mm          7.65 mm
x4     2×32 = 64       2×21 = 42       39 mm          21.65 mm
x8     2×49 = 98       2×38 = 76       56 mm          38.65 mm
x16    2×82 = 164      2×71 = 142      89 mm          71.65 mm
 
An open-end PCI Express x1 connector lets longer cards that use more lanes be plugged in while operating at x1 speeds

The PCIe Physical Layer (PHY, PCIEPHY, PCI Express PHY, or PCIe PHY) specification is divided into two sub-layers, corresponding to electrical and logical specifications. The logical sublayer is sometimes further divided into a MAC sublayer and a PCS, although this division is not formally part of the PCIe specification. A specification published by Intel, the PHY Interface for PCI Express (PIPE),[111] defines the MAC/PCS functional partitioning and the interface between these two sub-layers. The PIPE specification also identifies the physical media attachment (PMA) layer, which includes the serializer/deserializer (SerDes) and other analog circuitry; however, since SerDes implementations vary greatly among ASIC vendors, PIPE does not specify an interface between the PCS and PMA.

At the electrical level, each lane consists of two unidirectional differential pairs operating at 2.5, 5, 8, 16 or 32 Gbit/s, depending on the negotiated capabilities. Transmit and receive are separate differential pairs, for a total of four data wires per lane.

A connection between any two PCIe devices is known as a link, and is built up from a collection of one or more lanes. All devices must minimally support a single-lane (x1) link. Devices may optionally support wider links composed of up to 32 lanes.[112][113] This allows for very good compatibility in two ways:

  • A PCIe card physically fits (and works correctly) in any slot that is at least as large as it is (e.g., an x1 sized card works in any sized slot);
  • A slot of a large physical size (e.g., x16) can be wired electrically with fewer lanes (e.g., x1, x4, x8, or x12) as long as it provides the ground connections required by the larger physical slot size.

In both cases, PCIe negotiates the highest mutually supported number of lanes. Many graphics cards, motherboards and BIOS versions are verified to support x1, x4, x8 and x16 connectivity on the same connection.

The width of a PCIe connector is 8.8 mm, while the height is 11.25 mm, and the length is variable. The fixed section of the connector is 11.65 mm in length and contains two rows of 11 pins each (22 pins total), while the length of the other section is variable depending on the number of lanes. The pins are spaced at 1 mm intervals, and the thickness of the card going into the connector is 1.6 mm.[114][115]

Data transmission

PCIe sends all control messages, including interrupts, over the same links used for data. The serial protocol can never be blocked, so latency is still comparable to conventional PCI, which has dedicated interrupt lines. Taking into account the IRQ-sharing problem of pin-based interrupts, and the fact that message signaled interrupts (MSI) can bypass an I/O APIC and be delivered to the CPU directly, MSI performance ends up being substantially better.[116]

Data transmitted on multiple-lane links is interleaved, meaning that each successive byte is sent down successive lanes. The PCIe specification refers to this interleaving as data striping. While requiring significant hardware complexity to synchronize (or deskew) the incoming striped data, striping can significantly reduce the latency of the nth byte on a link. While the lanes are not tightly synchronized, there is a limit to the lane-to-lane skew of 20/8/6 ns for 2.5/5/8 GT/s, so the hardware buffers can re-align the striped data.[117] Due to padding requirements, striping may not necessarily reduce the latency of small data packets on a link.
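
A minimal sketch of the striping idea (illustrative Python; real hardware also adds framing, padding, and deskew buffering, which this ignores):

    # Round-robin byte striping: successive bytes go to successive lanes.
    def stripe(data: bytes, lanes: int):
        """Deal bytes to lanes 0, 1, ..., n-1, 0, 1, ..."""
        return [data[i::lanes] for i in range(lanes)]

    def unstripe(per_lane, total_len):
        out = bytearray(total_len)
        lanes = len(per_lane)
        for lane, chunk in enumerate(per_lane):
            out[lane::lanes] = chunk
        return bytes(out)

    payload = b"PCI Express stripes bytes across lanes"
    striped = stripe(payload, 4)               # one byte stream per lane
    assert unstripe(striped, len(payload)) == payload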

As with other high data rate serial transmission protocols, the clock is embedded in the signal. At the physical level, PCI Express 2.0 utilizes the 8b/10b encoding scheme[48] (line code) to ensure that strings of consecutive identical digits (zeros or ones) are limited in length, so that the receiver does not lose track of where the bit edges are. In this coding scheme every eight (uncoded) payload bits of data are replaced with 10 (encoded) bits of transmit data, causing a 20% overhead in the electrical bandwidth. To improve the available bandwidth, PCI Express version 3.0 instead uses 128b/130b encoding (1.54% overhead). Line encoding limits the run length of identical-digit strings in data streams and ensures the receiver stays synchronized to the transmitter via clock recovery.

A desirable balance (and therefore spectral density) of 0 and 1 bits in the data stream is achieved by XORing a known binary polynomial as a "scrambler" to the data stream in a feedback topology. Because the scrambling polynomial is known, the data can be recovered by applying the XOR a second time. Both the scrambling and descrambling steps are carried out in hardware.
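
A minimal sketch of the scramble/descramble symmetry, modeled on the 16-bit LFSR polynomial x¹⁶ + x⁵ + x⁴ + x³ + 1 used by PCIe 1.x/2.0 (bit ordering, seeding details, and the symbols exempted from scrambling are omitted; this is illustrative, not the specified implementation):

    # A Galois LFSR generates a pseudo-random keystream; XORing it with
    # the byte stream scrambles, and XORing again descrambles.
    def lfsr_keystream(nbytes, seed=0xFFFF):
        state = seed
        for _ in range(nbytes):
            out = 0
            for _ in range(8):
                bit = state >> 15                 # bit shifted out
                state = (state << 1) & 0xFFFF
                if bit:
                    state ^= 0x0039               # taps: x^5 + x^4 + x^3 + 1
                out = (out << 1) | bit
            yield out

    def scramble(data: bytes) -> bytes:
        return bytes(b ^ k for b, k in zip(data, lfsr_keystream(len(data))))

    plaintext = b"\x00" * 8                  # a long run of zeros
    scrambled = scramble(plaintext)          # now has frequent transitions
    assert scramble(scrambled) == plaintext  # the same operation descrambles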

Data link layer

The data link layer performs three vital services for the PCIe link:

  1. sequence the transaction layer packets (TLPs) that are generated by the transaction layer,
  2. ensure reliable delivery of TLPs between two endpoints via an acknowledgement protocol (ACK and NAK signaling) that explicitly requires replay of unacknowledged/bad TLPs,
  3. initialize and manage flow control credits

On the transmit side, the data link layer generates an incrementing sequence number for each outgoing TLP. It serves as a unique identification tag for each transmitted TLP, and is inserted into the header of the outgoing TLP. A 32-bit cyclic redundancy check code (known in this context as Link CRC or LCRC) is also appended to the end of each outgoing TLP.

On the receive side, the received TLP's LCRC and sequence number are both validated in the link layer. If either the LCRC check fails (indicating a data error), or the sequence-number is out of range (non-consecutive from the last valid received TLP), then the bad TLP, as well as any TLPs received after the bad TLP, are considered invalid and discarded. The receiver sends a negative acknowledgement message (NAK) with the sequence-number of the invalid TLP, requesting re-transmission of all TLPs forward of that sequence-number. If the received TLP passes the LCRC check and has the correct sequence number, it is treated as valid. The link receiver increments the sequence-number (which tracks the last received good TLP), and forwards the valid TLP to the receiver's transaction layer. An ACK message is sent to remote transmitter, indicating the TLP was successfully received (and by extension, all TLPs with past sequence-numbers.)

If the transmitter receives a NAK message, or no acknowledgement (NAK or ACK) is received until a timeout period expires, the transmitter must retransmit all TLPs that lack a positive acknowledgement (ACK). Barring a persistent malfunction of the device or transmission medium, the link-layer presents a reliable connection to the transaction layer, since the transmission protocol ensures delivery of TLPs over an unreliable medium.
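
The receive-side rule described above reduces to a sequence-number and CRC check; the sketch below models it with illustrative names (the real layer also handles sequence-number wraparound, replay timers, and DLLP encoding):

    # Accept a TLP only if its LCRC is good and its sequence number is
    # the next one expected; otherwise discard it and request a replay.
    def check_lcrc(tlp) -> bool:             # stand-in for the 32-bit LCRC
        return tlp.get("lcrc_ok", True)

    def receive_tlp(tlp, expected_seq):
        """Return (action, next_expected_seq); action is 'ACK' or 'NAK'."""
        if not check_lcrc(tlp) or tlp["seq"] != expected_seq:
            return "NAK", expected_seq       # replay from expected_seq
        return "ACK", expected_seq + 1       # forward to transaction layer

    action, nxt = receive_tlp({"seq": 5}, expected_seq=5)
    assert (action, nxt) == ("ACK", 6)
    action, nxt = receive_tlp({"seq": 7}, expected_seq=6)  # TLP 6 was lost
    assert action == "NAK"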

In addition to sending and receiving TLPs generated by the transaction layer, the data-link layer also generates and consumes data link layer packets (DLLPs). ACK and NAK signals are communicated via DLLPs, as are some power management messages and flow control credit information (on behalf of the transaction layer).

In practice, the number of in-flight, unacknowledged TLPs on the link is limited by two factors: the size of the transmitter's replay buffer (which must store a copy of all transmitted TLPs until the remote receiver ACKs them), and the flow control credits issued by the receiver to a transmitter. PCI Express requires all receivers to issue a minimum number of credits, to guarantee a link allows sending PCIConfig TLPs and message TLPs.

Transaction layer

PCI Express implements split transactions (transactions with request and response separated by time), allowing the link to carry other traffic while the target device gathers data for the response.

PCI Express uses credit-based flow control. In this scheme, a device advertises an initial amount of credit for each received buffer in its transaction layer. The device at the opposite end of the link, when sending transactions to this device, counts the number of credits each TLP consumes from its account. The sending device may only transmit a TLP when doing so does not make its consumed credit count exceed its credit limit. When the receiving device finishes processing the TLP from its buffer, it signals a return of credits to the sending device, which increases the credit limit by the restored amount. The credit counters are modular counters, and the comparison of consumed credits to credit limit requires modular arithmetic. The advantage of this scheme (compared to other methods such as wait states or handshake-based transfer protocols) is that the latency of credit return does not affect performance, provided that the credit limit is not encountered. This assumption is generally met if each device is designed with adequate buffer sizes.
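
A sketch of the credit accounting just described (illustrative Python; real links keep separate credit pools for posted, non-posted, and completion headers and data, which this collapses into one):

    MOD = 1 << 12   # counters wrap, so comparisons use modular arithmetic

    class CreditGate:
        def __init__(self, initial_credits):
            self.consumed = 0                # credits spent so far
            self.limit = initial_credits     # advertised by the receiver

        def try_send(self, cost):
            # Send only if consumed + cost stays within the limit (mod MOD).
            if (self.limit - self.consumed - cost) % MOD < MOD // 2:
                self.consumed = (self.consumed + cost) % MOD
                return True
            return False                     # stall until credits return

        def credits_returned(self, amount):  # receiver freed buffer space
            self.limit = (self.limit + amount) % MOD

    gate = CreditGate(initial_credits=4)
    assert gate.try_send(3)        # ok: 3 of 4 credits consumed
    assert not gate.try_send(2)    # would exceed the limit; must wait
    gate.credits_returned(2)
    assert gate.try_send(2)        # ok after credits are returned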

PCIe 1.x is often quoted to support a data rate of 250 MB/s in each direction, per lane. This figure is a calculation from the physical signaling rate (2.5 gigabaud) divided by the encoding overhead (10 bits per byte). This means a sixteen-lane (x16) PCIe card would then be theoretically capable of 16 × 250 MB/s = 4 GB/s in each direction. While this is correct in terms of data bytes, more meaningful calculations are based on the usable data payload rate, which depends on the profile of the traffic, which is a function of the high-level (software) application and intermediate protocol levels.

Like other high data rate serial interconnect systems, PCIe has a protocol and processing overhead due to the additional transfer robustness (CRC and acknowledgements). Long continuous unidirectional transfers (such as those typical in high-performance storage controllers) can approach >95% of PCIe's raw (lane) data rate. These transfers also benefit the most from increased number of lanes (x2, x4, etc.) But in more typical applications (such as a USB or Ethernet controller), the traffic profile is characterized as short data packets with frequent enforced acknowledgements.[118] This type of traffic reduces the efficiency of the link, due to overhead from packet parsing and forced interrupts (either in the device's host interface or the PC's CPU). Being a protocol for devices connected to the same printed circuit board, it does not require the same tolerance for transmission errors as a protocol for communication over longer distances, and thus, this loss of efficiency is not particular to PCIe.

Efficiency of the link

As with any "network-like" communication link, some of the "raw" bandwidth is consumed by protocol overhead:[119]

A PCIe 1.x lane, for example, offers a data rate on top of the physical layer of 250 MB/s (simplex). This is not the payload bandwidth but the physical layer bandwidth – a PCIe lane has to carry additional information for full functionality.[119]

Gen 2 Transaction Layer Packet[119]: 3
Layer         PHY     Data Link Layer   Transaction Layer                      Data Link Layer   PHY
Data          Start   Sequence          Header      Payload      ECRC          LCRC              End
Size (bytes)  1       2                 12 or 16    0 to 4096    4 (optional)  4                 1

The Gen 2 overhead is then 20, 24, or 28 bytes per transaction: 1 (start) + 2 (sequence) + 12 (header) + 4 (LCRC) + 1 (end) = 20 bytes, rising to 24 bytes with a 16-byte header, and to 28 bytes when the optional 4-byte ECRC is also present.

Gen 3 Transaction Layer Packet[119]: 3
Layer         G3 PHY  Data Link Layer   Transaction Layer                      Data Link Layer
Data          Start   Sequence          Header      Payload      ECRC          LCRC
Size (bytes)  4       2                 12 or 16    0 to 4096    4 (optional)  4

The Gen 3 overhead is then 22, 26, or 30 bytes per transaction: 4 (start) + 2 (sequence) + 12 (header) + 4 (LCRC) = 22 bytes, rising to 26 bytes with a 16-byte header, and to 30 bytes when the optional 4-byte ECRC is also present.

The resulting link efficiency for a 128-byte payload is 86%, and 98% for a 1024-byte payload. For small accesses like register settings (4 bytes), the efficiency drops as low as 16%.[citation needed]

The maximum payload size (MPS) is set on all devices based on the smallest maximum of any device in the chain. If one device has an MPS of 128 bytes, all devices of the tree must set their MPS to 128 bytes. In this case the bus will have a peak efficiency of 86% for writes, as the sketch below reproduces.[119]: 3
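
The efficiency figures above follow from payload over payload-plus-overhead; the sketch below (illustrative Python, using the Gen 2 minimum overhead of 20 bytes) reproduces them:

    # Link efficiency = payload / (payload + per-TLP overhead).
    def efficiency(payload_bytes, overhead_bytes=20):
        return payload_bytes / (payload_bytes + overhead_bytes)

    for mps in (4, 128, 1024):
        print(f"{mps:5d}-byte payload: {efficiency(mps):.1%}")
    # ~16.7% for a 4-byte register access, 86.5% at 128 bytes,
    # 98.1% at 1024 bytes, matching the figures quoted above.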

Applications

 
Asus Nvidia GeForce GTX 650 Ti, a PCI Express 3.0 x16 graphics card
 
The Nvidia GeForce GTX 1070, a PCI Express 3.0 x16 Graphics card
 
Intel 82574L Gigabit Ethernet NIC, a PCI Express x1 card
 
A Marvell-based SATA 3.0 controller, as a PCI Express x1 card

PCI Express operates in consumer, server, and industrial applications: as a motherboard-level interconnect (to link motherboard-mounted peripherals), as a passive backplane interconnect, and as an expansion card interface for add-in boards.

In virtually all modern (as of 2012) PCs, from consumer laptops and desktops to enterprise data servers, the PCIe bus serves as the primary motherboard-level interconnect, connecting the host system-processor with both integrated peripherals (surface-mounted ICs) and add-on peripherals (expansion cards). In most of these systems, the PCIe bus co-exists with one or more legacy PCI buses, for backward compatibility with the large body of legacy PCI peripherals.

As of 2013, PCI Express has replaced AGP as the default interface for graphics cards on new systems. Almost all models of graphics cards released since 2010 by AMD (ATI) and Nvidia use PCI Express. Nvidia uses the high-bandwidth data transfer of PCIe for its Scalable Link Interface (SLI) technology, which allows multiple graphics cards of the same chipset and model number to run in tandem for increased performance.[citation needed] AMD has also developed a multi-GPU system based on PCIe called CrossFire.[citation needed] AMD, Nvidia, and Intel have released motherboard chipsets that support as many as four PCIe x16 slots, allowing tri-GPU and quad-GPU card configurations.

External GPUs

Theoretically, external PCIe could give a notebook the graphics power of a desktop by connecting it to any PCIe desktop video card (enclosed in its own external housing, with a power supply and cooling); this is possible with an ExpressCard or Thunderbolt interface. An ExpressCard interface provides bit rates of 5 Gbit/s (0.5 GB/s throughput), whereas a Thunderbolt interface provides bit rates of up to 40 Gbit/s (5 GB/s throughput).

In 2006, Nvidia developed the Quadro Plex external PCIe family of GPUs for advanced graphics applications in the professional market.[120] These video cards require a PCI Express x8 or x16 slot for the host-side card, which connects to the Plex via a VHDCI cable carrying eight PCIe lanes.[121]

In 2008, AMD announced the ATI XGP technology, based on a proprietary cabling system that is compatible with PCIe x8 signal transmissions.[122] This connector is available on the Fujitsu Amilo and the Acer Ferrari One notebooks. Fujitsu launched their AMILO GraphicBooster enclosure for XGP soon thereafter.[123] Around 2010 Acer launched the Dynavivid graphics dock for XGP.[124]

In 2010, external card hubs were introduced that can connect to a laptop or desktop through a PCI ExpressCard slot. These hubs can accept full-sized graphics cards. Examples include the MSI GUS,[125] Village Instruments' ViDock,[126] the Asus XG Station, and the Bplus PE4H V3.2 adapter,[127] as well as more improvised DIY devices.[128] However, such solutions are limited by the size (often only x1) and version of the available PCIe slot on a laptop.

The Intel Thunderbolt interface provides another option for connecting to a PCIe card externally. Magma has released the ExpressBox 3T, which can hold up to three PCIe cards (two at x8 and one at x4).[129] MSI also released the Thunderbolt GUS II, a PCIe chassis dedicated to video cards.[130] Other products, such as Sonnet's Echo Express[131] and mLogic's mLink, are Thunderbolt PCIe chassis in a smaller form factor.[132]

In 2017, more fully featured external card hubs were introduced, such as the Razer Core, which has a full-length PCIe x16 interface.[133]

Storage devices

 
An OCZ RevoDrive SSD, a full-height x4 PCI Express card

The PCI Express protocol can be used as the data interface to flash memory devices, such as memory cards and solid-state drives (SSDs).

The XQD card is a memory card format utilizing PCI Express, developed by the CompactFlash Association, with transfer rates of up to 1 GB/s.[134]

Many high-performance, enterprise-class SSDs are designed as PCI Express RAID controller cards.[citation needed] Before NVMe was standardized, many of these cards utilized proprietary interfaces and custom drivers to communicate with the operating system; they had much higher transfer rates (over 1 GB/s) and IOPS (over one million I/O operations per second) than Serial ATA or SAS drives.[135][136] For example, in 2011 OCZ and Marvell co-developed a native PCI Express solid-state drive controller for a PCI Express 3.0 x16 slot with a maximum capacity of 12 TB, performing up to 7.2 GB/s in sequential transfers and up to 2.52 million IOPS in random transfers.[137]

SATA Express was an interface for connecting SSDs through SATA-compatible ports, optionally providing multiple PCI Express lanes as a pure PCI Express connection to the attached storage device.[138] M.2 is a specification for internally mounted computer expansion cards and associated connectors, which also uses multiple PCI Express lanes.[139]

PCI Express storage devices can implement both the AHCI logical interface, for backward compatibility, and the NVM Express logical interface, which provides much faster I/O operations by exploiting the internal parallelism offered by such devices. Enterprise-class SSDs can also implement SCSI over PCI Express.[140]

Cluster interconnect

Certain data-center applications (such as large computer clusters) require the use of fiber-optic interconnects due to the distance limitations inherent in copper cabling. Typically, a network-oriented standard such as Ethernet or Fibre Channel suffices for these applications, but in some cases the overhead introduced by routable protocols is undesirable and a lower-level interconnect, such as InfiniBand, RapidIO, or NUMAlink, is needed. Local-bus standards such as PCIe and HyperTransport can in principle be used for this purpose,[141] but as of 2015, solutions are only available from niche vendors such as Dolphin ICS and TTTech Auto.

Competing protocols

Other communications standards based on high-bandwidth serial architectures include InfiniBand, RapidIO, HyperTransport, Intel QuickPath Interconnect, and the Mobile Industry Processor Interface (MIPI). The differences are based on trade-offs between flexibility and extensibility versus latency and overhead. For example, making the system hot-pluggable, as with InfiniBand but not PCI Express, requires that software track network topology changes.[citation needed]

Another example is making the packets shorter to decrease latency (as is required if a bus must operate as a memory interface). Smaller packets mean packet headers consume a higher percentage of the packet, thus decreasing the effective bandwidth. Examples of bus protocols designed for this purpose are RapidIO and HyperTransport.[citation needed]

PCI Express falls somewhere in the middle, targeted by design as a system interconnect (local bus) rather than a device interconnect or routed network protocol. Additionally, its design goal of software transparency constrains the protocol and raises its latency somewhat.[citation needed]

Delays in PCIe 4.0 implementations led to the Gen-Z consortium, the CCIX effort and an open Coherent Accelerator Processor Interface (CAPI) all being announced by the end of 2016.[142]

On 11 March 2019, Intel presented Compute Express Link (CXL), a new interconnect bus based on the PCI Express 5.0 physical layer infrastructure. The initial promoters of the CXL specification included Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei, Intel and Microsoft.[143]

Integrators list

The PCI-SIG Integrators List lists products made by PCI-SIG member companies that have passed compliance testing. The list includes switches, bridges, NICs, SSDs, etc.[144]

See also

Notes

  1. ^ Switches can create multiple endpoints out of one endpoint, allowing it to be shared by multiple devices.
  2. ^ The card's Serial ATA power connector is present because the USB 3.0 ports require more power than the PCI Express bus can supply. More often, a 4-pin Molex power connector is used.

References

  1. ^ Mayhew, D.; Krishnan, V. (August 2003). "PCI express and advanced switching: Evolutionary path to building next generation interconnects". 11th Symposium on High Performance Interconnects, 2003. Proceedings. pp. 21–29. doi:10.1109/CONECT.2003.1231473. ISBN 0-7695-2012-X. S2CID 7456382.
  2. ^ "Definition of PCI Express". PCMag.
  3. ^ Zhang, Yanmin; Nguyen, T Long (June 2007). (PDF). Proceedings of the Linux Symposium. Fedora project. Archived from the original (PDF) on 10 March 2016. Retrieved 8 May 2012.
  4. ^ "Flash Memory Form Factors – The Fundamentals of Reliable Flash Storage". Hyperstone. https://www.hyperstone.com. Retrieved 19 April 2018.
  5. ^ a b c Ravi Budruk (21 August 2007). . PCI-SIG. Archived from the original (PDF) on 15 July 2014. Retrieved 15 July 2014.
  6. ^ "What are PCIe Slots and Their Uses". PC Guide 101. 18 May 2021. Retrieved 21 June 2021.
  7. ^ "How PCI Express Works". How Stuff Works. 17 August 2005. from the original on 3 December 2009. Retrieved 7 December 2009.
  8. ^ "4.2.4.9. Link Width and Lane Sequence Negotiation", PCI Express Base Specification, Revision 2.1., 4 March 2009
  9. ^ a b c . PCI-SIG. Archived from the original on 13 November 2008. Retrieved 23 November 2008.
  10. ^ . Interface bus. Archived from the original on 8 December 2007. Retrieved 12 June 2010.
  11. ^ 32 lanes are defined by the PCIe Base Specification up to PCIe 5.0 but there's no card standard in the PCIe Card Electromechanical Specification and that lane number was never implemented.
  12. ^ . Developer Zone. National Instruments. 13 August 2009. Archived from the original on 5 January 2010. Retrieved 7 December 2009.
  13. ^ Qazi, Atif. "What are PCIe Slots?". PC Gear Lab. Retrieved 8 April 2020.
  14. ^ "New PCIe Form Factor Enables Greater PCIe SSD Adoption". NVM Express. 12 June 2012. from the original on 6 September 2015.
  15. ^ "Memblaze PBlaze4 AIC NVMe SSD Review". StorageReview. 21 December 2015.
  16. ^ Fulton, Kane (20 July 2015). "19 graphics cards that shaped the future of gaming". TechRadar.
  17. ^ Leadbetter, Richard (16 September 2020). "Nvidia GeForce RTX 3080 review: welcome to the next level". Eurogamer.
  18. ^ "Sapphire Radeon RX 5700 XT Pulse Review | bit-tech.net". bit-tech.net. Retrieved 26 August 2019.
  19. ^ "AMD Radeon™ RX 5700 XT 8GB GDDR6 THICC II – RX-57XT8DFD6". xfxforce.com. Retrieved 25 August 2019.
  20. ^ "ROG Strix GeForce RTX 3080 OC Edition 10GB GDDR6X | Graphics Cards". rog.asus.com.
  21. ^ . Frequently Asked Questions. Adex Electronics. 1998. Archived from the original on 2 November 2011. Retrieved 24 October 2011.
  22. ^ a b PCI Express Card Electromechanical Specification Revision 2.0
  23. ^ "PCI Express Card Electromechanical Specification Revision 4.0, Version 1.0 (Clean)".
  24. ^ "L1 PM Substates with CLKREQ, Revision 1.0a" (PDF). PCI-SIG. Retrieved 8 November 2018.
  25. ^ (PDF). PCI-SIG. Archived from the original (PDF) on 9 November 2018. Retrieved 8 November 2018.
  26. ^ PCI Express Card Electromechanical Specification Revision 1.1
  27. ^ Schoenborn, Zale (2004), Board Design Guidelines for PCI Express Architecture (PDF), PCI-SIG, pp. 19–21, archived (PDF) from the original on 27 March 2016
  28. ^ PCI Express Base Specification, Revision 1.1 Page 332
  29. ^ "Where Does PCIe Cable Go?". 16 January 2022. Retrieved 10 June 2022.
  30. ^ a b "Mini-Fit® PCI Express®* Wire to Board Connector System" (PDF). Retrieved 4 December 2020.
  31. ^ PCI Express x16 Graphics 150W-ATX Specification Revision 1.0
  32. ^ PCI Express 225 W/300 W High Power Card Electromechanical Specification Revision 1.0
  33. ^ PCI Express Card Electromechanical Specification Revision 3.0
  34. ^ Yun Ling (16 May 2008). . Archived from the original on 5 November 2015. Retrieved 7 November 2015.
  35. ^ "MP1: Mini PCI Express / PCI Express Adapter". hwtools.net. 18 July 2014. from the original on 3 October 2014. Retrieved 28 September 2014.
  36. ^ "mSATA FAQ: A Basic Primer". Notebook review. from the original on 12 February 2012.
  37. ^ "Eee PC Research". ivc (wiki). from the original on 30 March 2010. Retrieved 26 October 2009.
  38. ^ "Desktop Board Solid-state drive (SSD) compatibility". Intel. from the original on 2 January 2016.
  39. ^ "How to distinguish the differences between M.2 cards | Dell US". www.dell.com. Retrieved 24 March 2020.
  40. ^ "PCI Express External Cabling 1.0 Specification". from the original on 10 February 2007. Retrieved 9 February 2007.
  41. ^ . PCI SIG. 7 February 2007. Archived from the original on 26 November 2013. Retrieved 7 December 2012.
  42. ^ a b c . www.connectortips.com. Archived from the original on 13 March 2017.
  43. ^ Mokosiy, Vitaliy (9 October 2020). "Untangling terms: M.2, NVMe, USB-C, SAS, PCIe, U.2, OCuLink". Medium. Retrieved 26 March 2021.
  44. ^ "Supermicro Universal I/O (UIO) Solutions". Supermicro.com. from the original on 24 March 2014. Retrieved 24 March 2014.
  45. ^ "Get ready for M-PCIe testing", PC board design, EDN
  46. ^ a b "PCI SIG discusses M‐PCIe oculink & 4th gen PCIe", The Register, UK, 13 September 2013, from the original on 29 June 2017
  47. ^ . pcisig.com. PCI-SIG. Archived from the original on 18 May 2014. Retrieved 18 May 2014.
  48. ^ a b c . pcisig.com. PCI-SIG. Archived from the original on 1 February 2014. Retrieved 1 May 2014.
  49. ^ "What does GT/s mean, anyway?". TM World. from the original on 14 August 2012. Retrieved 7 December 2012.
  50. ^ . SE: Eiscat. Archived from the original on 17 August 2010. Retrieved 7 December 2012.
  51. ^ PCI SIG, archived from the original on 6 July 2008
  52. ^ (PDF) (Press release). PCI-SIG. 15 January 2007. Archived from the original (PDF) on 4 March 2007. Retrieved 9 February 2007. — note that in this press release the term aggregate bandwidth refers to the sum of incoming and outgoing bandwidth; using this terminology the aggregate bandwidth of full duplex 100BASE-TX is 200 Mbit/s.
  53. ^ Smith, Tony (11 October 2006). "PCI Express 2.0 final draft spec published". The Register. Archived from the original on 29 January 2007. Retrieved 9 February 2007.
  54. ^ Key, Gary; Fink, Wesley (21 May 2007). "Intel P35: Intel's Mainstream Chipset Grows Up". AnandTech. Archived from the original on 23 May 2007. Retrieved 21 May 2007.
  55. ^ Huynh, Anh (8 February 2007). . AnandTech. Archived from the original on 10 February 2007. Retrieved 9 February 2007.
  56. ^ "Intel P35 Express Chipset Product Brief" (PDF). Intel. (PDF) from the original on 26 September 2007. Retrieved 5 September 2007.
  57. ^ Hachman, Mark (5 August 2009). "PCI Express 3.0 Spec Pushed Out to 2010". PC Mag. from the original on 7 January 2014. Retrieved 7 December 2012.
  58. ^ "PCI Express 3.0 Bandwidth: 8.0 Gigatransfers/s". ExtremeTech. 9 August 2007. from the original on 24 October 2007. Retrieved 5 September 2007.
  59. ^ . X bit labs. 18 November 2010. Archived from the original on 21 November 2010. Retrieved 18 November 2010.
  60. ^ "PCIe 3.1 and 4.0 Specifications Revealed". eteknix.com. July 2013. from the original on 1 February 2016.
  61. ^ "Trick or Treat… PCI Express 3.1 Released!". synopsys.com. from the original on 23 March 2015.
  62. ^ (press release). PCI-SIG. 29 November 2011. Archived from the original on 23 December 2012. Retrieved 7 December 2012.
  63. ^ . pcisig.com. Archived from the original on 20 October 2016.
  64. ^ "PCIe 4.0 Heads to Fab, 5.0 to Lab". EE Times. 26 June 2016. from the original on 28 August 2016. Retrieved 27 August 2016.
  65. ^ "Mellanox Announces ConnectX-5, the Next Generation of 100G InfiniBand and Ethernet Smart Interconnect Adapter | NVIDIA". www.mellanox.com.
  66. ^ "Mellanox Announces 200Gb/s HDR InfiniBand Solutions Enabling Record Levels of Performance and Scalability | NVIDIA". www.mellanox.com.
  67. ^ "IDF: PCIe 4.0 läuft, PCIe 5.0 in Arbeit". Heise Online (in German). 18 August 2016. from the original on 19 August 2016. Retrieved 18 August 2016.
  68. ^ Brian Thompto, POWER9 Processor for the Cognitive Era
  69. ^ 2016 IEEE Hot Chips 28 Symposium (HCS), 21–23 Aug. 2016
  70. ^ a b Born, Eric (8 June 2017). "PCIe 4.0 specification finally out with 16 GT/s on tap". Tech Report. Archived from the original on 8 June 2017. Retrieved 8 June 2017.
  71. ^ "IBM Unveils Most Advanced Server for AI". www-03.ibm.com. 5 December 2017.
  72. ^ IBM Power System AC922 (8335-GTG) server helps you to harness breakthrough accelerated AI, HPDA, and HPC performance for faster time to insight, IBM Europe Hardware Announcement ZG17-0147
  73. ^ "NETINT Introduces Codensity with Support for PCIe 4.0 – NETINT Technologies". NETINT Technologies. 17 July 2018. Retrieved 28 September 2018.
  74. ^ Mujtaba, Hassan (9 January 2019). "AMD Ryzen 3000 Series CPUs Based on Zen 2 Launching in Mid of 2019".
  75. ^ Alcorn, Paul (3 June 2019). "AMD Nixes PCIe 4.0 Support on Older Socket AM4 Motherboards, Here's Why". Tom's Hardware. Archived from the original on 10 June 2019. Retrieved 10 June 2019.
  76. ^ Alcorn, Paul (10 January 2019). "PCIe 4.0 May Come to all AMD Socket AM4 Motherboards (Updated)". Tom's Hardware. Archived from the original on 10 June 2019. Retrieved 10 June 2019.
  77. ^ Cutress, Dr. Ian (13 August 2020). "Tiger Lake IO and Power". Anandtech.
  78. ^ "1,2,3,4,5... It's Official, PCIe 5.0 is Announced | synopsys.com". www.synopsys.com. Retrieved 7 June 2017.{{cite web}}: CS1 maint: url-status (link)
  79. ^ "PLDA Announces Availability of XpressRICH5™ PCIe 5.0 Controller IP | PLDA.com". www.plda.com. Retrieved 28 June 2018.
  80. ^ "XpressRICH5 for ASIC | PLDA.com". www.plda.com. Retrieved 28 June 2018.
  81. ^ "Doubling Bandwidth in Under Two Years: PCI Express® Base Specification Revision 5.0, Version 0.9 is Now Available to Members". pcisig.com. Retrieved 12 December 2018.
  82. ^ "PCIe 5.0 Is Ready For Prime Time". tomshardware.com. 17 January 2019. Retrieved 18 January 2019.
  83. ^ "PCI-SIG® Achieves 32GT/s with New PCI Express® 5.0 Specification". www.businesswire.com. 29 May 2019.
  84. ^ "PCI-Express 5.0: China stellt ersten Controller vor". PC Games Hardware. 18 November 2019.
  85. ^ IBM’s POWER10 Processor, Hot Chips 32, August 16–18, 2020
  86. ^ Power E1080 Enterprise server delivers a uniquely architected platform to help securely and efficiently scale core operational and AI applications in a hybrid cloud, IBM Europe Hardware Announcement ZG21-0059
  87. ^ a b IBM Power E1080 Technical Overview and Introduction
  88. ^ "Intel Unveils 12th Gen Intel Core, Launches World's Best Gaming". Intel.com. Retrieved 16 February 2022.
  89. ^ "NVIDIA Announces Hopper Architecture, the Next Generation of Accelerated Computing".
  90. ^ "AMD Showcases Industry-Leading Gaming, Commercial, and Mainstream PC Technologies at COMPUTEX 2022". AMD.com. Retrieved 23 May 2022.
  91. ^ "4th Gen AMD EPYC™ Processor Architecture". AMD.com. Retrieved 12 November 2022.
  92. ^ a b "PCI-SIG® Announces Upcoming PCI Express® 6.0 Specification to Reach 64 GT/s". www.businesswire.com. 18 June 2019.
  93. ^ Smith, Ryan. "PCI Express Bandwidth to Be Doubled Again: PCIe 6.0 Announced, Spec to Land in 2021". www.anandtech.com.
  94. ^ "PCI Express 6.0 Reaches Version 0.5 Ahead Of Finalization Next Year – Phoronix". www.phoronix.com.
  95. ^ Shilov, Anton (4 November 2020). "PCIe 6.0 Specification Hits Milestone: Complete Draft Is Ready". Tom's Hardware.
  96. ^ Yanes, Al. "PCIe® 6.0 Specification, Version 0.9: One Step Closer to Final Release | PCI-SIG". pcisig.com. Retrieved 6 October 2021.
  97. ^ "PCI-SIG® Releases PCIe® 6.0 Specification Delivering Record Performance to Power Big Data Applications". Business Wire. 11 January 2022. Retrieved 16 February 2022.
  98. ^ "The Evolution of the PCI Express Specification: On its Sixth Generation, Third Decade and Still Going Strong". Pci-Sig. 11 January 2022. Retrieved 16 February 2022.
  99. ^ Debendra Das Sharma. "PCIe 6.0 Specification: The Interconnect for I/O Needs of the Future". PCI-SIG. p. 8. Archived from the original on 30 October 2021.
  100. ^ "Pushing the Envelope with PCIe 6.0: Bringing PAM4 to PCIe" (PDF). Retrieved 16 February 2022.
  101. ^ "PowerPoint Presentation" (PDF). Retrieved 16 February 2022.
  102. ^ "PCI-SIG® Announces PCI Express® 7.0 Specification to Reach 128 GT/s". Business Wire. 21 June 2022. Retrieved 25 June 2022.
  103. ^ "PLX demo shows PCIe over fiber as data center clustering interconnect". Cabling install. Penn Well. Retrieved 29 August 2012.
  104. ^ "Introduced second generation PCI Express Gen 2 over fiber optic systems". Adnaco. 22 April 2011. from the original on 4 October 2012. Retrieved 29 August 2012.
  105. ^ "PCIe Active Optical Cable System". from the original on 30 December 2014. Retrieved 23 October 2015.
  106. ^ IBM Power Systems E870 and E880 Technical Overview and Introduction
  107. ^ "Acer, Asus to Bring Intel's Thunderbolt Speed Technology to Windows PCs". PC World. 14 September 2011. from the original on 18 January 2012. Retrieved 7 December 2012.
  108. ^ Kevin Parrish (28 June 2013). "PCIe for Mobile Launched; PCIe 3.1, 4.0 Specs Revealed". Tom's Hardware. Retrieved 10 July 2014.
  109. ^ "PCI Express 4.0 Draft 0.7 & PIPE 4.4 Specifications – What Do They Mean to Designers? — Synopsys Technical Article | ChipEstimate.com". www.chipestimate.com. Retrieved 28 June 2018.
  110. ^ "PCI Express 1x, 4x, 8x, 16x bus pinout and wiring @". RU: Pinouts. from the original on 25 November 2009. Retrieved 7 December 2009.
  111. ^ (PDF) (version 2.00 ed.). Intel. Archived from the original (PDF) on 17 March 2008. Retrieved 21 May 2008.
  112. ^ PCI Express System Architecture
  113. ^ PCI Express Architecture, intel.com
  114. ^ "Mechanical Drawing for PCI Express Connector". Interface bus. Retrieved 7 December 2007.
  115. ^ "FCi schematic for PCIe connectors" (PDF). FCI connect. Retrieved 7 December 2007.
  116. ^ Reducing Interrupt Latency Through the Use of Message Signaled Interrupts
  117. ^ PCI Express Base Specification, Revision 3.0 Table 4-24
  118. ^ "Computer Peripherals And Interfaces". Technical Publications Pune. from the original on 25 February 2014. Retrieved 23 July 2009.
  119. ^ a b c d e Lawley, Jason (28 October 2014). "Understanding Performance of PCI Express Systems" (PDF). 1.2. Xilinx.{{cite web}}: CS1 maint: url-status (link)
  120. ^ "NVIDIA Introduces NVIDIA Quadro® Plex – A Quantum Leap in Visual Computing". Nvidia. 1 August 2006. from the original on 24 August 2006. Retrieved 14 July 2018.
  121. ^ "Quadro Plex VCS – Advanced visualization and remote graphics". nVidia. from the original on 28 April 2011. Retrieved 11 September 2010.
  122. ^ . ATI. AMD. Archived from the original on 29 January 2010. Retrieved 11 September 2010.
  123. ^ Fujitsu-Siemens Amilo GraphicBooster External Laptop GPU Released, 3 December 2008, archived from the original on 16 October 2015, retrieved 9 August 2015
  124. ^ DynaVivid Graphics Dock from Acer arrives in France, what about the US?, 11 August 2010, archived from the original on 16 October 2015, retrieved 9 August 2015
  125. ^ Dougherty, Steve (22 May 2010), "MSI to showcase 'GUS' external graphics solution for laptops at Computex", TweakTown
  126. ^ Hellstrom, Jerry (9 August 2011), "ExpressCard trying to pull a (not so) fast one?", PC Perspective (editorial), archived from the original on 1 February 2016
  127. ^ "PE4H V3.2 (PCIe x16 Adapter)". Hwtools.net. Archived from the original on 14 February 2014. Retrieved 5 February 2014.
  128. ^ O'Brien, Kevin (8 September 2010), "How to Upgrade Your Notebook Graphics Card Using DIY ViDOCK", Notebook review, archived from the original on 13 December 2013
  129. ^ Lal Shimpi, Anand (7 September 2011), "The Thunderbolt Devices Trickle In: Magma's ExpressBox 3T", AnandTech, archived from the original on 4 March 2016
  130. ^ "MSI GUS II external GPU enclosure with Thunderbolt". The Verge (hands-on). 10 January 2012. Archived from the original on 13 February 2012. Retrieved 12 February 2012.
  131. ^ "PCI express graphics, Thunderbolt", Tom’s hardware, 17 September 2012
  132. ^ "M logics M link Thunderbold chassis no shipping", Engadget, 13 December 2012, from the original on 25 June 2017
  133. ^ Burns, Chris (17 October 2017), "2017 Razer Blade Stealth and Core V2 detailed", SlashGear, from the original on 17 October 2017
  134. ^ "CompactFlash Association readies next-gen XQD format, promises write speeds of 125 MB/s and up". Engadget. 8 December 2011. from the original on 19 May 2014. Retrieved 18 May 2014.
  135. ^ Zsolt Kerekes (December 2011). "What's so very different about the design of Fusion-io's ioDrives / PCIe SSDs?". storagesearch.com. from the original on 23 September 2013. Retrieved 2 October 2013.
  136. ^ . storagereview.com. 16 July 2012. Archived from the original on 4 October 2013. Retrieved 2 October 2013.
  137. ^ . X-bit labs. Archived from the original on 25 March 2013. Retrieved 7 December 2012.
  138. ^ "Enabling Higher Speed Storage Applications with SATA Express". SATA-IO. from the original on 27 November 2012. Retrieved 7 December 2012.
  139. ^ "SATA M.2 Card". SATA-IO. from the original on 3 October 2013. Retrieved 14 September 2013.
  140. ^ . SCSI Trade Association. Archived from the original on 27 January 2013. Retrieved 27 December 2012.
  141. ^ Meduri, Vijay (24 January 2011). "A Case for PCI Express as a High-Performance Cluster Interconnect". HPCwire. Archived from the original on 14 January 2013. Retrieved 7 December 2012.
  142. ^ Evan Koblentz (3 February 2017). "New PCI Express 4.0 delay may empower next-gen alternatives". Tech Republic. Archived from the original on 1 April 2017. Retrieved 31 March 2017.
  143. ^ Cutress, Ian. "CXL Specification 1.0 Released: New Industry High-Speed Interconnect From Intel". www.anandtech.com. Retrieved 9 August 2019.
  144. ^ "Integrators List | PCI-SIG". pcisig.com. Retrieved 27 March 2019.

Further reading

  • Budruk, Ravi; Anderson, Don; Shanley, Tom (2003), Winkles, Joseph ‘Joe’ (ed.), PCI Express System Architecture, Mind share PC system architecture, Addison-Wesley, ISBN 978-0-321-15630-3, 1120 pp.
  • Solari, Edward; Congdon, Brad (2003), Complete PCI Express Reference: Design Implications for Hardware and Software Developers, Intel, ISBN 978-0-9717861-9-6, 1056 pp.
  • Wilen, Adam; Schade, Justin P; Thornburg, Ron (April 2003), Introduction to PCI Express: A Hardware and Software Developer's Guide, Intel, ISBN 978-0-9702846-9-3, 325 pp.

External links

  •   Media related to PCIe at Wikimedia Commons
  • PCI-SIG Specifications

express, confused, with, ucie, engineering, procurement, construction, installation, epci, peripheral, component, interconnect, express, officially, abbreviated, pcie, high, speed, serial, computer, expansion, standard, designed, replace, older, standards, com. Not to be confused with PCI X or UCIe For Engineering Procurement Construction and Installation see EPCI PCI Express Peripheral Component Interconnect Express officially abbreviated as PCIe or PCI e 1 is a high speed serial computer expansion bus standard designed to replace the older PCI PCI X and AGP bus standards It is the common motherboard interface for personal computers graphics cards sound cards hard disk drive host adapters SSDs Wi Fi and Ethernet hardware connections 2 PCIe has numerous improvements over the older standards including higher maximum system bus throughput lower I O pin count and smaller physical footprint better performance scaling for bus devices a more detailed error detection and reporting mechanism Advanced Error Reporting AER 3 and native hot swap functionality More recent revisions of the PCIe standard provide hardware support for I O virtualization PCI ExpressPeripheral Component Interconnect ExpressPCI Express logoYear created2003 20 years ago 2003 Created byIntelDellHPIBMSupersedesPCIPCI XAGPWidth in bits1 per lane up to 16 lanes No of devices1 on each endpoint of each connection a SpeedDual simplex examples in single lane x1 and 16 lane x16 Version 1 x 2 5 GT s x1 250 MB sx16 4 GB sVersion 2 x 5 GT s x1 500 MB sx16 8 GB sVersion 3 x 8 GT s x1 985 MB sx16 15 75 GB sVersion 4 0 16 GT s x1 1 97 GB sx16 31 5 GB sVersion 5 0 32 GT s x1 3 94 GB sx16 63 GB sVersion 6 0 64 GT s x1 7 56 GB sx16 121 GB sVersion 7 0 128 GT s x1 15 13 GB sx16 242 GB sStyleSerialHotplugging interfaceYes with ExpressCard OCuLink CFexpress or U 2 External interfaceYes with OCuLink or PCI Express External Cabling Websitepcisig wbr comTwo types of PCIe slot on an Asus H81M K motherboard Various slots on a computer motherboard from top to bottom PCI Express x4PCI Express x16PCI Express x1PCI Express X16Conventional PCI 32 bit 5 V The PCI Express electrical interface is measured by the number of simultaneous lanes 4 A lane is a single send receive line of data The analogy is a highway with traffic in both directions The interface is also used in a variety of other standards most notably the laptop expansion card interface called ExpressCard It is also used in the storage interfaces of SATA Express U 2 SFF 8639 and M 2 Format specifications are maintained and developed by the PCI SIG PCI Special Interest Group a group of more than 900 companies that also maintains the conventional PCI specifications Contents 1 Architecture 1 1 Interconnect 1 2 Lane 1 3 Serial bus 2 Form factors 2 1 PCI Express standard 2 1 1 Non standard video card form factors 2 1 2 Pinout 2 1 3 Power 2 2 PCI Express Mini Card 2 2 1 Physical dimensions 2 2 2 Electrical interface 2 2 3 Mini SATA mSATA variant 2 3 PCI Express M 2 2 4 PCI Express External Cabling 2 4 1 PCI Express OCuLink 2 5 Derivative forms 3 History and revisions 3 1 PCI Express 1 0a 3 1 1 PCI Express 1 1 3 2 PCI Express 2 0 3 2 1 PCI Express 2 1 3 3 PCI Express 3 0 3 3 1 PCI Express 3 1 3 4 PCI Express 4 0 3 5 PCI Express 5 0 3 6 PCI Express 6 0 3 7 PCI Express 7 0 4 Extensions and future directions 4 1 Draft process 5 Hardware protocol summary 5 1 Physical layer 5 1 1 Data transmission 5 2 Data link layer 5 3 Transaction layer 5 4 Efficiency of the link 6 Applications 6 1 External GPUs 6 2 Storage 
devices 6 3 Cluster interconnect 7 Competing protocols 8 Integrators list 9 See also 10 Notes 11 References 12 Further reading 13 External linksArchitecture Edit Example of the PCI Express topology white junction boxes represent PCI Express device downstream ports while the gray ones represent upstream ports 5 7 PCI Express x1 card containing a PCI Express switch covered by a small heat sink which creates multiple endpoints out of one endpoint and lets multiple devices share it The PCIe slots on a motherboard are often labeled with the number of PCIe lanes they have Sometimes what may seem like a large slot may only have a few lanes For instance an x16 slot with only 4 PCIe lanes is quite common 6 Conceptually the PCI Express bus is a high speed serial replacement of the older PCI PCI X bus 7 One of the key differences between the PCI Express bus and the older PCI is the bus topology PCI uses a shared parallel bus architecture in which the PCI host and all devices share a common set of address data and control lines In contrast PCI Express is based on point to point topology with separate serial links connecting every device to the root complex host Because of its shared bus topology access to the older PCI bus is arbitrated in the case of multiple masters and limited to one master at a time in a single direction Furthermore the older PCI clocking scheme limits the bus clock to the slowest peripheral on the bus regardless of the devices involved in the bus transaction In contrast a PCI Express bus link supports full duplex communication between any two endpoints with no inherent limitation on concurrent access across multiple endpoints In terms of bus protocol PCI Express communication is encapsulated in packets The work of packetizing and de packetizing data and status message traffic is handled by the transaction layer of the PCI Express port described later Radical differences in electrical signaling and bus protocol require the use of a different mechanical form factor and expansion connectors and thus new motherboards and new adapter boards PCI slots and PCI Express slots are not interchangeable At the software level PCI Express preserves backward compatibility with PCI legacy PCI system software can detect and configure newer PCI Express devices without explicit support for the PCI Express standard though new PCI Express features are inaccessible The PCI Express link between two devices can vary in size from one to 16 lanes In a multi lane link the packet data is striped across lanes and peak data throughput scales with the overall link width The lane count is automatically negotiated during device initialization and can be restricted by either endpoint For example a single lane PCI Express x1 card can be inserted into a multi lane slot x4 x8 etc and the initialization cycle auto negotiates the highest mutually supported lane count The link can dynamically down configure itself to use fewer lanes providing a failure tolerance in case bad or unreliable lanes are present The PCI Express standard defines link widths of x1 x2 x4 x8 and x16 Up to and including PCIe 5 0 x12 and x32 links were defined as well but never used 8 This allows the PCI Express bus to serve both cost sensitive applications where high throughput is not needed and performance critical applications such as 3D graphics networking 10 Gigabit Ethernet or multiport Gigabit Ethernet and enterprise storage SAS or Fibre Channel Slots and connectors are only defined for a subset of these widths with link widths in between 
using the next larger physical slot size As a point of reference a PCI X 133 MHz 64 bit device and a PCI Express 1 0 device using four lanes x4 have roughly the same peak single direction transfer rate of 1064 MB s The PCI Express bus has the potential to perform better than the PCI X bus in cases where multiple devices are transferring data simultaneously or if communication with the PCI Express peripheral is bidirectional Interconnect Edit A PCI Express link between two devices consists of one or more lanes which are dual simplex channels using two differential signaling pairs 5 3 PCI Express devices communicate via a logical connection called an interconnect 9 or link A link is a point to point communication channel between two PCI Express ports allowing both of them to send and receive ordinary PCI requests configuration I O or memory read write and interrupts INTx MSI or MSI X At the physical level a link is composed of one or more lanes 9 Low speed peripherals such as an 802 11 Wi Fi card use a single lane x1 link while a graphics adapter typically uses a much wider and therefore faster 16 lane x16 link Lane Edit A lane is composed of two differential signaling pairs with one pair for receiving data and the other for transmitting Thus each lane is composed of four wires or signal traces Conceptually each lane is used as a full duplex byte stream transporting data packets in eight bit byte format simultaneously in both directions between endpoints of a link 10 Physical PCI Express links may contain 1 4 8 or 16 lanes 11 5 4 5 9 Lane counts are written with an x prefix for example x8 represents an eight lane card or slot with x16 being the largest size in common use 12 Lane sizes are also referred to via the terms width or by e g an eight lane slot could be referred to as a by 8 or as 8 lanes wide For mechanical card sizes see below Serial bus Edit This section does not cite any sources Please help improve this section by adding citations to reliable sources Unsourced material may be challenged and removed March 2018 Learn how and when to remove this template message The bonded serial bus architecture was chosen over the traditional parallel bus because of the inherent limitations of the latter including half duplex operation excess signal count and inherently lower bandwidth due to timing skew Timing skew results from separate electrical signals within a parallel interface traveling through conductors of different lengths on potentially different printed circuit board PCB layers and at possibly different signal velocities Despite being transmitted simultaneously as a single word signals on a parallel interface have different travel duration and arrive at their destinations at different times When the interface clock period is shorter than the largest time difference between signal arrivals recovery of the transmitted word is no longer possible Since timing skew over a parallel bus can amount to a few nanoseconds the resulting bandwidth limitation is in the range of hundreds of megahertz Highly simplified topologies of the Legacy PCI Shared Parallel Interface and the PCIe Serial Point to Point Interface 13 A serial interface does not exhibit timing skew because there is only one differential signal in each direction within each lane and there is no external clock signal since clocking information is embedded within the serial signal itself As such typical bandwidth limitations on serial signals are in the multi gigahertz range PCI Express is one example of the general trend toward 
replacing parallel buses with serial interconnects other examples include Serial ATA SATA USB Serial Attached SCSI SAS FireWire IEEE 1394 and RapidIO In digital video examples in common use are DVI HDMI and DisplayPort Multichannel serial design increases flexibility with its ability to allocate fewer lanes for slower devices Form factors EditPCI Express standard Edit Intel P3608 NVMe flash SSD PCI E add in cardA PCI Express card fits into a slot of its physical size or larger with x16 as the largest used but may not fit into a smaller PCI Express slot for example a x16 card may not fit into a x4 or x8 slot Some slots use open ended sockets to permit physically longer cards and negotiate the best available electrical and logical connection The number of lanes actually connected to a slot may also be fewer than the number supported by the physical slot size An example is a x16 slot that runs at x4 which accepts any x1 x2 x4 x8 or x16 card but provides only four lanes Its specification may read as x16 x4 mode while mechanical electrical notation e g x16 x4 is also common citation needed The advantage is that such slots can accommodate a larger range of PCI Express cards without requiring motherboard hardware to support the full transfer rate Standard mechanical sizes are x1 x4 x8 and x16 Cards with a differing number of lanes need to use the next larger mechanical size i e a x2 card uses the x4 size or a x12 card uses the x16 size The cards themselves are designed and manufactured in various sizes For example solid state drives SSDs that come in the form of PCI Express cards often use HHHL half height half length and FHHL full height half length to describe the physical dimensions of the card 14 15 PCI card type Dimensions height length width maximum mm in Full Length 111 15 312 00 20 32 4 376 12 283 0 8Half Length 111 15 167 65 20 32 4 376 0 6 600 0 8Low Profile Slim 0 68 90 167 65 20 32 2 731 0 6 600 0 8Non standard video card form factors Edit Modern since c 2012 16 gaming video cards usually exceed the height as well as thickness specified in the PCI Express standard due to the need for more capable and quieter cooling fans as gaming video cards often emit hundreds of watts of heat 17 Modern computer cases are often wider to accommodate these taller cards but not always Since full length cards 312 mm are uncommon modern cases sometimes cannot fit those The thickness of these cards also typically occupies the space of 2 PCIe slots In fact even the methodology of how to measure the cards varies between vendors with some including the metal bracket size in dimensions and others not For instance a 2020 Sapphire card measures 135 mm in height excluding the metal bracket which exceeds the PCIe standard height by 28 mm 18 Another card by XFX measures 55 mm thick i e 2 7 PCI slots at 20 32 mm taking up 3 PCIe slots 19 The Asus GeForce RTX 3080 10 GB STRIX GAMING OC video card is a two slot card that has dimensions of 318 5 mm 140 1 mm 57 8 mm exceeding PCI Express maximum length height and thickness respectively 20 Pinout Edit The following table identifies the conductors on each side of the edge connector on a PCI Express card The solder side of the printed circuit board PCB is the A side and the component side is the B side 21 PRSNT1 and PRSNT2 pins must be slightly shorter than the rest to ensure that a hot plugged card is fully inserted The WAKE pin uses full voltage to wake the computer but must be pulled high from the standby power to indicate that the card is wake capable 22 PCI Express 
connector pinout x1 x4 x8 and x16 variants Pin Side B Side A Description Pin Side B Side A Description0 1 12 V PRSNT1 Must connect to farthest PRSNT2 pin 50 HSOp 8 Reserved Lane 8 transmit data and 0 2 12 V 12 V Main power pins 51 HSOn 8 Ground0 3 12 V 12 V 52 Ground HSIp 8 Lane 8 receive data and 0 4 Ground Ground 53 Ground HSIn 8 0 5 SMCLK TCK SMBus and JTAG port pins 54 HSOp 9 Ground Lane 9 transmit data and 0 6 SMDAT TDI 55 HSOn 9 Ground0 7 Ground TDO 56 Ground HSIp 9 Lane 9 receive data and 0 8 3 3 V TMS 57 Ground HSIn 9 0 9 TRST 3 3 V 58 HSOp 10 Ground Lane 10 transmit data and 10 3 3 V aux 3 3 V Aux power amp Standby power 59 HSOn 10 Ground11 WAKE PERST Link reactivation fundamental reset 23 60 Ground HSIp 10 Lane 10 receive data and Key notch 61 Ground HSIn 10 12 CLKREQ 24 Ground Clock Request Signal 62 HSOp 11 Ground Lane 11 transmit data and 13 Ground REFCLK Reference clock differential pair 63 HSOn 11 Ground14 HSOp 0 REFCLK Lane 0 transmit data and 64 Ground HSIp 11 Lane 11 receive data and 15 HSOn 0 Ground 65 Ground HSIn 11 16 Ground HSIp 0 Lane 0 receive data and 66 HSOp 12 Ground Lane 12 transmit data and 17 PRSNT2 HSIn 0 67 HSOn 12 Ground18 Ground Ground 68 Ground HSIp 12 Lane 12 receive data and PCI Express x1 cards end at pin 18 69 Ground HSIn 12 19 HSOp 1 Reserved Lane 1 transmit data and 70 HSOp 13 Ground Lane 13 transmit data and 20 HSOn 1 Ground 71 HSOn 13 Ground21 Ground HSIp 1 Lane 1 receive data and 72 Ground HSIp 13 Lane 13 receive data and 22 Ground HSIn 1 73 Ground HSIn 13 23 HSOp 2 Ground Lane 2 transmit data and 74 HSOp 14 Ground Lane 14 transmit data and 24 HSOn 2 Ground 75 HSOn 14 Ground25 Ground HSIp 2 Lane 2 receive data and 76 Ground HSIp 14 Lane 14 receive data and 26 Ground HSIn 2 77 Ground HSIn 14 27 HSOp 3 Ground Lane 3 transmit data and 78 HSOp 15 Ground Lane 15 transmit data and 28 HSOn 3 Ground 79 HSOn 15 Ground29 Ground HSIp 3 Lane 3 receive data and 80 Ground HSIp 15 Lane 15 receive data and 30 PWRBRK 25 HSIn 3 81 PRSNT2 HSIn 15 31 PRSNT2 Ground 82 Reserved Ground32 Ground ReservedPCI Express x4 cards end at pin 3233 HSOp 4 Reserved Lane 4 transmit data and 34 HSOn 4 Ground35 Ground HSIp 4 Lane 4 receive data and 36 Ground HSIn 4 37 HSOp 5 Ground Lane 5 transmit data and 38 HSOn 5 Ground39 Ground HSIp 5 Lane 5 receive data and 40 Ground HSIn 5 41 HSOp 6 Ground Lane 6 transmit data and 42 HSOn 6 Ground43 Ground HSIp 6 Lane 6 receive data and Legend44 Ground HSIn 6 Ground pin Zero volt reference45 HSOp 7 Ground Lane 7 transmit data and Power pin Supplies power to the PCIe card46 HSOn 7 Ground Card to host pin Signal from the card to the motherboard47 Ground HSIp 7 Lane 7 receive data and Host to card pin Signal from the motherboard to the card48 PRSNT2 HSIn 7 Open drain May be pulled low or sensed by multiple cards49 Ground Ground Sense pin Tied together on cardPCI Express x8 cards end at pin 49 Reserved Not presently used do not connectPower Edit 8 pin left and 6 pin right power connectors used on PCI Express cards All PCI express cards may consume up to 3 A at 3 3 V 9 9 W The amount of 12 V and total power they may consume depends on the form factor and the role of the card 26 35 36 27 28 x1 cards are limited to 0 5 A at 12 V 6 W and 10 W combined x4 and wider cards are limited to 2 1 A at 12 V 25 W and 25 W combined A full sized x1 card may draw up to the 25 W limits after initialization and software configuration as a high power device A full sized x16 graphics card may draw up to 5 5 A at 12 V 66 W and 75 W combined after initialization and 
software configuration as a high power device 22 38 39 The main 12 V power supply for the PCIe slot is pins B2 B3 side B and pins A2 A3 side A Power standby 3 3 V is pin B10 and A10 PCIe x1 cards can receive up to 25 W and x16 graphics cards can receive up to 75 W combined 29 Optional connectors add 75 W 6 pin or 150 W 8 pin of 12 V power for up to 300 W total 2 75 W 1 150 W Sense0 pin is connected to ground by the cable or power supply or float on board if cable is not connected Sense1 pin is connected to ground by the cable or power supply or float on board if cable is not connected Some cards use two 8 pin connectors but this has not been standardized yet as of 2018 update therefore such cards must not carry the official PCI Express logo This configuration allows 375 W total 1 75 W 2 150 W and will likely be standardized by PCI SIG with the PCI Express 4 0 standard needs update The 8 pin PCI Express connector could be confused with the EPS12V connector which is mainly used for powering SMP and multi core systems The power connectors are variants of the Molex Mini Fit Jr series connectors 30 Molex Mini Fit Jr part numbers 30 Pins Female receptacle on PS cable Male right angle header on PCB6 pin 45559 0002 45558 00038 pin 45587 0004 45586 0005 45586 00066 pin power connector 75 W 31 8 pin power connector 150 W 32 33 34 6 pin power connector pin map 8 pin power connector pin mapPin Description Pin Description1 12 V 1 12 V2 Not connected usually 12 V as well 2 12 V3 12 V 3 12 V4 Sense1 8 pin connected A 4 Ground 5 Ground5 Sense 6 Sense0 6 pin or 8 pin connected 6 Ground 7 Ground8 Ground When a 6 pin connector is plugged into an 8 pin receptacle the card is notified by a missing Sense1 that it may only use up to 75 W PCI Express Mini Card Edit A WLAN PCI Express Mini Card and its connector MiniPCI and MiniPCI Express cards in comparison PCI Express Mini Card also known as Mini PCI Express Mini PCIe Mini PCI E mPCIe and PEM based on PCI Express is a replacement for the Mini PCI form factor It is developed by the PCI SIG The host device supports both PCI Express and USB 2 0 connectivity and each card may use either standard Most laptop computers built after 2005 use PCI Express for expansion cards however as of 2015 update many vendors are moving toward using the newer M 2 form factor for this purpose Due to different dimensions PCI Express Mini Cards are not physically compatible with standard full size PCI Express slots however passive adapters exist that let them be used in full size slots 35 Physical dimensions Edit Dimensions of PCI Express Mini Cards are 30 mm 50 95 mm width length for a Full Mini Card There is a 52 pin edge connector consisting of two staggered rows on a 0 8 mm pitch Each row has eight contacts a gap equivalent to four contacts then a further 18 contacts Boards have a thickness of 1 0 mm excluding the components A Half Mini Card sometimes abbreviated as HMC is also specified having approximately half the physical length of 26 8 mm Electrical interface Edit PCI Express Mini Card edge connectors provide multiple connections and buses PCI Express x1 with SMBus USB 2 0 Wires to diagnostics LEDs for wireless network i e Wi Fi status on computer s chassis SIM card for GSM and WCDMA applications UIM signals on spec Future extension for another PCIe lane 1 5 V and 3 3 V powerMini SATA mSATA variant Edit An Intel mSATA SSD Despite sharing the Mini PCI Express form factor an mSATA slot is not necessarily electrically compatible with Mini PCI Express For this reason only certain 
notebooks are compatible with mSATA drives Most compatible systems are based on Intel s Sandy Bridge processor architecture using the Huron River platform Notebooks such as Lenovo s ThinkPad T W and X series released in March April 2011 have support for an mSATA SSD card in their WWAN card slot The ThinkPad Edge E220s E420s and the Lenovo IdeaPad Y460 Y560 Y570 Y580 also support mSATA 36 On the contrary the L series among others can only support M 2 cards using the PCIe standard in the WWAN slot Some notebooks notably the Asus Eee PC the Apple MacBook Air and the Dell mini9 and mini10 use a variant of the PCI Express Mini Card as an SSD This variant uses the reserved and several non reserved pins to implement SATA and IDE interface passthrough keeping only USB ground lines and sometimes the core PCIe x1 bus intact 37 This makes the miniPCIe flash and solid state drives sold for netbooks largely incompatible with true PCI Express Mini implementations Also the typical Asus miniPCIe SSD is 71 mm long causing the Dell 51 mm model to often be incorrectly referred to as half length A true 51 mm Mini PCIe SSD was announced in 2009 with two stacked PCB layers that allow for higher storage capacity The announced design preserves the PCIe interface making it compatible with the standard mini PCIe slot No working product has yet been developed Intel has numerous desktop boards with the PCIe x1 Mini Card slot that typically do not support mSATA SSD A list of desktop boards that natively support mSATA in the PCIe x1 Mini Card slot typically multiplexed with a SATA port is provided on the Intel Support site 38 PCI Express M 2 Edit Main article M 2 M 2 replaces the mSATA standard and Mini PCIe 39 Computer bus interfaces provided through the M 2 connector are PCI Express 3 0 up to four lanes Serial ATA 3 0 and USB 3 0 a single logical port for each of the latter two It is up to the manufacturer of the M 2 host or device to choose which interfaces to support depending on the desired level of host support and device type PCI Express External Cabling Edit PCI Express External Cabling also known as External PCI Express Cabled PCI Express or ePCIe specifications were released by the PCI SIG in February 2007 40 41 Standard cables and connectors have been defined for x1 x4 x8 and x16 link widths with a transfer rate of 250 MB s per lane The PCI SIG also expects the norm to evolve to reach 500 MB s as in PCI Express 2 0 An example of the uses of Cabled PCI Express is a metal enclosure containing a number of PCIe slots and PCIe to ePCIe adapter circuitry This device would not be possible had it not been for the ePCIe specification PCI Express OCuLink Edit OCuLink standing for optical copper link since Cu is the chemical symbol for Copper is an extension for the cable version of PCI Express acting as a competitor to version 3 of the Thunderbolt interface Version 1 0 of OCuLink released in Oct 2015 supports up to PCIe 3 0 x4 lanes 8 GT s gigatransfers per second 3 9 GB s over copper cabling a fiber optic version may appear in the future In its latest version OCuLink 2 it supports up to 16 GB s PCIe 4 0 x8 42 while the maximum bandwidth of a full speed Thunderbolt 4 cable is 5 GB s Some suppliers may design their connector product to be able to support next generation PCI Express 5 0 running at 32 GT s per lane for future proofing and minimizing development costs over the next few years 42 Initially PCI SIG expected to bring OCuLink into laptops for connection of powerful external GPU boxes It turned out to be a rare 
use Instead OCuLink became popular for PCIe interconnections in servers 43 Derivative forms Edit Numerous other form factors use or are able to use PCIe These include Low height card ExpressCard Successor to the PC Card form factor with x1 PCIe and USB 2 0 hot pluggable PCI Express ExpressModule A hot pluggable modular form factor defined for servers and workstations XQD card A PCI Express based flash card standard by the CompactFlash Association with x2 PCIe CFexpress card A PCI Express based flash card by the CompactFlash Association in three form factors supporting 1 to 4 PCIe lanes SD card The SD Express bus introduced in version 7 0 of the SD specification uses an x1 PCIe link XMC Similar to the CMC PMC form factor VITA 42 3 AdvancedTCA A complement to CompactPCI for larger applications supports serial based backplane topologies AMC A complement to the AdvancedTCA specification supports processor and I O modules on ATCA boards x1 x2 x4 or x8 PCIe FeaturePak A tiny expansion card format 43 mm 65 mm for embedded and small form factor applications which implements two x1 PCIe links on a high density connector along with USB I2C and up to 100 points of I O Universal IO A variant from Super Micro Computer Inc designed for use in low profile rack mounted chassis 44 It has the connector bracket reversed so it cannot fit in a normal PCI Express socket but it is pin compatible and may be inserted if the bracket is removed M 2 formerly known as NGFF M PCIe brings PCIe 3 0 to mobile devices such as tablets and smartphones over the M PHY physical layer 45 46 U 2 formerly known as SFF 8639 The PCIe slot connector can also carry protocols other than PCIe Some 9xx series Intel chipsets support Serial Digital Video Out a proprietary technology that uses a slot to transmit video signals from the host CPU s integrated graphics instead of PCIe using a supported add in The PCIe transaction layer protocol can also be used over some other interconnects which are not electrically PCIe Thunderbolt A royalty free interconnect standard by Intel that combines DisplayPort and PCIe protocols in a form factor compatible with Mini DisplayPort Thunderbolt 3 0 also combines USB 3 1 and uses the USB C form factor as opposed to Mini DisplayPort USB4History and revisions EditWhile in early development PCIe was initially referred to as HSI for High Speed Interconnect and underwent a name change to 3GIO for 3rd Generation I O before finally settling on its PCI SIG name PCI Express A technical working group named the Arapaho Work Group AWG drew up the standard For initial drafts the AWG consisted only of Intel engineers subsequently the AWG expanded to include industry partners Since PCIe has undergone several large and smaller revisions improving on performance and other features PCI Express link performance 47 48 Version Intro duced Line code Transfer rate per lane i ii Throughput i iii x1 x2 x4 x8 x161 0 2003 NRZ 8b 10b 2 5 GT s 0 250 GB s 0 500 GB s 1 000 GB s 2 000 GB s 4 000 GB s2 0 2007 5 0 GT s 0 500 GB s 1 000 GB s 2 000 GB s 4 000 GB s 8 000 GB s3 0 2010 128b 130b 8 0 GT s 0 985 GB s 1 969 GB s 3 938 GB s 0 7 877 GB s 15 754 GB s4 0 2017 16 0 GT s 1 969 GB s 3 938 GB s 0 7 877 GB s 15 754 GB s 0 31 508 GB s5 0 2019 32 0 GT s 3 938 GB s 0 7 877 GB s 15 754 GB s 31 508 GB s 63 015 GB s6 0 2022 PAM 4FEC 242B 256B FLIT 64 0 GT s 32 0 GBd 7 563 GB s 15 125 GB s 30 250 GB s 60 500 GB s 121 000 GB s7 0 2025 planned 128 0 GT s 64 0 GBd 15 125 GB s 30 250 GB s 60 500 GB s 121 000 GB s 242 000 GB sNotes a b In each 
direction each lane is a dual simplex channel Transfer rate refers to the encoded serial bit rate 2 5 GT s means 2 5 Gbit s serial data rate Throughput indicates the unencoded bandwidth without 8b 10b 128b 130b or 242B 256B encoding overhead The PCIe 1 0 transfer rate of 2 5 GT s per lane means a 2 5 Gbit s serial bit rate corresponding to a throughput of 2 0 Gbit s or 250 MB s prior to 8b 10b encoding PCI Express 1 0a Edit In 2003 PCI SIG introduced PCIe 1 0a with a per lane data rate of 250 MB s and a transfer rate of 2 5 gigatransfers per second GT s Transfer rate is expressed in transfers per second instead of bits per second because the number of transfers includes the overhead bits which do not provide additional throughput 49 PCIe 1 x uses an 8b 10b encoding scheme resulting in a 20 2 10 overhead on the raw channel bandwidth 50 So in the PCIe terminology transfer rate refers to the encoded bit rate 2 5 GT s is 2 5 Gbps on the encoded serial link This corresponds to 2 0 Gbps of pre coded data or 250 MB s which is referred to as throughput in PCIe PCI Express 1 1 Edit In 2005 PCI SIG 51 introduced PCIe 1 1 This updated specification includes clarifications and several improvements but is fully compatible with PCI Express 1 0a No changes were made to the data rate PCI Express 2 0 Edit A PCI Express 2 0 expansion card that provides USB 3 0 connectivity b PCI SIG announced the availability of the PCI Express Base 2 0 specification on 15 January 2007 52 The PCIe 2 0 standard doubles the transfer rate compared with PCIe 1 0 to 5 GT s and the per lane throughput rises from 250 MB s to 500 MB s Consequently a 16 lane PCIe connector x16 can support an aggregate throughput of up to 8 GB s PCIe 2 0 motherboard slots are fully backward compatible with PCIe v1 x cards PCIe 2 0 cards are also generally backward compatible with PCIe 1 x motherboards using the available bandwidth of PCI Express 1 1 Overall graphic cards or motherboards designed for v2 0 work with the other being v1 1 or v1 0a The PCI SIG also said that PCIe 2 0 features improvements to the point to point data transfer protocol and its software architecture 53 Intel s first PCIe 2 0 capable chipset was the X38 and boards began to ship from various vendors Abit Asus Gigabyte as of 21 October 2007 54 AMD started supporting PCIe 2 0 with its AMD 700 chipset series and nVidia started with the MCP72 55 All of Intel s prior chipsets including the Intel P35 chipset supported PCIe 1 1 or 1 0a 56 Like 1 x PCIe 2 0 uses an 8b 10b encoding scheme therefore delivering per lane an effective 4 Gbit s max transfer rate from its 5 GT s raw data rate PCI Express 2 1 Edit PCI Express 2 1 with its specification dated 4 March 2009 supports a large proportion of the management support and troubleshooting systems planned for full implementation in PCI Express 3 0 However the speed is the same as PCI Express 2 0 The increase in power from the slot breaks backward compatibility between PCI Express 2 1 cards and some older motherboards with 1 0 1 0a but most motherboards with PCI Express 1 1 connectors are provided with a BIOS update by their manufacturers through utilities to support backward compatibility of cards with PCIe 2 1 PCI Express 3 0 Edit PCI Express 3 0 Base specification revision 3 0 was made available in November 2010 after multiple delays In August 2007 PCI SIG announced that PCI Express 3 0 would carry a bit rate of 8 gigatransfers per second GT s and that it would be backward compatible with existing PCI Express implementations At that time 
PCI Express 1.1

In 2005, PCI-SIG[51] introduced PCIe 1.1. This updated specification includes clarifications and several improvements, but is fully compatible with PCI Express 1.0a. No changes were made to the data rate.

PCI Express 2.0

A PCI Express 2.0 expansion card that provides USB 3.0 connectivity.[b]

PCI-SIG announced the availability of the PCI Express Base 2.0 specification on 15 January 2007.[52] The PCIe 2.0 standard doubles the transfer rate compared with PCIe 1.0, to 5 GT/s, and the per-lane throughput rises from 250 MB/s to 500 MB/s. Consequently, a 16-lane PCIe connector (x16) can support an aggregate throughput of up to 8 GB/s.

PCIe 2.0 motherboard slots are fully backward compatible with PCIe v1.x cards. PCIe 2.0 cards are also generally backward compatible with PCIe 1.x motherboards, using the available bandwidth of PCI Express 1.1. Overall, graphic cards or motherboards designed for v2.0 work with the other being v1.1 or v1.0a.

The PCI-SIG also said that PCIe 2.0 features improvements to the point-to-point data transfer protocol and its software architecture.[53]

Intel's first PCIe 2.0 capable chipset was the X38, and boards began to ship from various vendors (Abit, Asus, Gigabyte) as of 21 October 2007.[54] AMD started supporting PCIe 2.0 with its AMD 700 chipset series, and Nvidia started with the MCP72.[55] All of Intel's prior chipsets, including the Intel P35 chipset, supported PCIe 1.1 or 1.0a.[56]

Like 1.x, PCIe 2.0 uses an 8b/10b encoding scheme, therefore delivering, per lane, an effective 4 Gbit/s maximum transfer rate from its 5 GT/s raw data rate.

PCI Express 2.1

PCI Express 2.1 (with its specification dated 4 March 2009) supports a large proportion of the management, support, and troubleshooting systems planned for full implementation in PCI Express 3.0; however, the speed is the same as PCI Express 2.0. The increase in power from the slot breaks backward compatibility between PCI Express 2.1 cards and some older motherboards with 1.0/1.0a, but most motherboards with PCI Express 1.1 connectors are provided with a BIOS update by their manufacturers through utilities to support backward compatibility of cards with PCIe 2.1.

PCI Express 3.0

PCI Express 3.0 Base specification revision 3.0 was made available in November 2010, after multiple delays. In August 2007, PCI-SIG announced that PCI Express 3.0 would carry a bit rate of 8 gigatransfers per second (GT/s), and that it would be backward compatible with existing PCI Express implementations. At that time, it was also announced that the final specification for PCI Express 3.0 would be delayed until Q2 2010.[57] New features for the PCI Express 3.0 specification included a number of optimizations for enhanced signaling and data integrity, including transmitter and receiver equalization, PLL improvements, clock data recovery, and channel enhancements of currently supported topologies.[58]

Following a six-month technical analysis of the feasibility of scaling the PCI Express interconnect bandwidth, PCI-SIG's analysis found that 8 gigatransfers per second could be manufactured in mainstream silicon process technology, and deployed with existing low-cost materials and infrastructure, while maintaining full compatibility (with negligible impact) with the PCI Express protocol stack.

PCI Express 3.0 upgraded the encoding scheme to 128b/130b from the previous 8b/10b encoding, reducing the bandwidth overhead from 20% of PCI Express 2.0 to approximately 1.54% (= 2/130). PCI Express 3.0's 8 GT/s bit rate effectively delivers 985 MB/s per lane, nearly doubling the lane bandwidth relative to PCI Express 2.0.[48]

On 18 November 2010, the PCI Special Interest Group officially published the finalized PCI Express 3.0 specification to its members to build devices based on this new version of PCI Express.[59]

PCI Express 3.1

In September 2013, the PCI Express 3.1 specification was announced for release in late 2013 or early 2014, consolidating various improvements to the published PCI Express 3.0 specification in three areas: power management, performance, and functionality.[46][60] It was released in November 2014.[61]

PCI Express 4.0

On 29 November 2011, PCI-SIG preliminarily announced PCI Express 4.0,[62] providing a 16 GT/s bit rate that doubles the bandwidth provided by PCI Express 3.0, to 31.5 GB/s in each direction for a 16-lane configuration, while maintaining backward and forward compatibility in both software support and used mechanical interface.[63] PCI Express 4.0 specs also bring OCuLink-2, an alternative to Thunderbolt. OCuLink version 2 has up to 16 GT/s (16 GB/s total for x8 lanes),[42] while the maximum bandwidth of a Thunderbolt 3 link is 5 GB/s.

In June 2016, Cadence, PLDA, and Synopsys demoed PCIe 4.0 physical-layer, controller, switch, and other IP blocks at the PCI-SIG's annual developer's conference.[64]

Mellanox Technologies announced the first 100 Gbit/s network adapter with PCIe 4.0 on 15 June 2016,[65] and the first 200 Gbit/s network adapter with PCIe 4.0 on 10 November 2016.[66]

In August 2016, Synopsys presented a test setup with an FPGA clocking a lane to PCIe 4.0 speeds at the Intel Developer Forum. Their IP has been licensed to several firms planning to present their chips and products at the end of 2016.[67]

At the IEEE Hot Chips Symposium in August 2016, IBM announced the first CPU with PCIe 4.0 support, POWER9.[68][69]

PCI-SIG officially announced the release of the final PCI Express 4.0 specification on 8 June 2017.[70] The spec includes improvements in flexibility, scalability, and lower power.

On 5 December 2017, IBM announced the first system with PCIe 4.0 slots, Power AC922.[71][72]

NETINT Technologies introduced the first NVMe SSD based on PCIe 4.0 on 17 July 2018, ahead of Flash Memory Summit 2018.[73]

AMD announced on 9 January 2019 that its upcoming Zen 2-based processors and X570 chipset would support PCIe 4.0.[74] AMD had hoped to enable partial support for older chipsets, but instability caused by motherboard traces not conforming to PCIe 4.0 specifications made that impossible.[75][76]

Intel released their first mobile CPUs with PCI Express 4.0 support in mid-2020, as a part of the Tiger Lake microarchitecture.[77]
PCI Express 5.0

In June 2017, PCI-SIG announced the PCI Express 5.0 preliminary specification.[70] Bandwidth was expected to increase to 32 GT/s, yielding 63 GB/s in each direction in a 16-lane configuration. The draft spec was expected to be standardized in 2019. Initially, 25.0 GT/s was also considered for technical feasibility.

On 7 June 2017 at PCI-SIG DevCon, Synopsys recorded the first demonstration of PCI Express 5.0 at 32 GT/s.[78]

On 31 May 2018, PLDA announced the availability of their XpressRICH5 PCIe 5.0 Controller IP, based on draft 0.7 of the PCIe 5.0 specification, on the same day.[79][80]

On 10 December 2018, the PCI-SIG released version 0.9 of the PCIe 5.0 specification to its members,[81] and on 17 January 2019, PCI-SIG announced that version 0.9 had been ratified, with version 1.0 targeted for release in the first quarter of 2019.[82]

On 29 May 2019, PCI-SIG officially announced the release of the final PCI Express 5.0 specification.[83]

On 20 November 2019, Jiangsu Huacun presented the first PCIe 5.0 controller, HC9001, in a 12 nm manufacturing process.[84] Production started in 2020.

On 17 August 2020, IBM announced the Power10 processor, with PCIe 5.0 and up to 32 lanes per single-chip module (SCM) and up to 64 lanes per double-chip module (DCM).[85]

On 9 September 2021, IBM announced the Power E1080 Enterprise server, with a planned availability date of 17 September.[86] It can have up to 16 Power10 SCMs, with a maximum of 32 slots per system, which can act as PCIe 5.0 x8 or PCIe 4.0 x16.[87] Alternatively, they can be used as PCIe 5.0 x16 slots for optional optical CXP converter adapters connecting to external PCIe expansion drawers.

On 27 October 2021, Intel announced the 12th Gen Intel Core CPU family, the world's first consumer x86-64 processors with PCIe 5.0 (up to 16 lanes) connectivity.[88]

On 22 March 2022, Nvidia announced the Nvidia Hopper GH100 GPU, the world's first PCIe 5.0 GPU.[89]

On 23 May 2022, AMD announced its Zen 4 architecture, with support for up to 24 lanes of PCIe 5.0 connectivity on consumer platforms and 128 lanes on server platforms.[90][91]

PCI Express 6.0

On 18 June 2019, PCI-SIG announced the development of the PCI Express 6.0 specification. Bandwidth is expected to increase to 64 GT/s, yielding 128 GB/s in each direction in a 16-lane configuration, with a target release date of 2021.[92] The new standard uses 4-level pulse-amplitude modulation (PAM-4) with a low-latency forward error correction (FEC) in place of non-return-to-zero (NRZ) modulation.[93] Unlike previous PCI Express versions, forward error correction is used to increase data integrity, and PAM-4 is used as line code so that two bits are transferred per transfer. With a 64 GT/s data transfer rate (raw bit rate), up to 121 GB/s in each direction is possible in x16 configuration.[92]

On 24 February 2020, the PCI Express 6.0 revision 0.5 specification (a "first draft" with all architectural aspects and requirements defined) was released.[94]

On 5 November 2020, the PCI Express 6.0 revision 0.7 specification (a "complete draft" with electrical specifications validated via test chips) was released.[95]

On 6 October 2021, the PCI Express 6.0 revision 0.9 specification (a "final draft") was released.[96]

On 11 January 2022, PCI-SIG officially announced the release of the final PCI Express 6.0 specification.[97]

PAM-4 coding results in a vastly higher bit error rate (BER) of 10⁻⁶ (vs. 10⁻¹² previously), so in place of 128b/130b encoding, a 3-way interleaved forward error correction (FEC) is used in addition to a cyclic redundancy check (CRC). A fixed 256-byte Flow Control Unit (FLIT) block carries 242 bytes of data, which includes variable-sized transaction layer packets (TLP) and data link layer payload (DLLP); the remaining 14 bytes are reserved for an 8-byte CRC and a 6-byte FEC.[98][99] 3-way Gray code is used in PAM-4/FLIT mode to reduce the error rate; the interface does not switch to NRZ and 128b/130b encoding even when retraining to lower data rates.[100][101]
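Because the FLIT size and its CRC/FEC reservations are fixed, the 242B/256B efficiency follows directly from the byte counts above; a minimal arithmetic sketch:

```python
# PCIe 6.0 FLIT layout as described above (arithmetic illustration only).
FLIT_BYTES = 256   # fixed Flow Control Unit size
CRC_BYTES = 8      # link CRC
FEC_BYTES = 6      # 3-way interleaved forward error correction

payload_bytes = FLIT_BYTES - CRC_BYTES - FEC_BYTES  # TLP/DLLP data per flit
print(payload_bytes, payload_bytes / FLIT_BYTES)    # 242 0.9453125
```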
PCI Express 7.0

On 21 June 2022, PCI-SIG announced the development of the PCI Express 7.0 specification.[102] It will deliver a 128 GT/s raw bit rate and up to 242 GB/s per direction in x16 configuration, using the same PAM-4 signaling as version 6.0. Doubling of the data rate will be achieved by fine-tuning channel parameters to decrease signal losses and improve power efficiency. The specification is expected to be finalised in 2025.

Extensions and future directions

Some vendors offer PCIe-over-fiber products,[103][104][105] with active optical cables (AOC) for PCIe switching at increased distance in PCIe expansion drawers,[106][87] or in specific cases where transparent PCIe bridging is preferable to using a more mainstream standard (such as InfiniBand or Ethernet) that may require additional software to support it.

Thunderbolt was co-developed by Intel and Apple as a general-purpose high-speed interface combining a logical PCIe link with DisplayPort, and was originally intended as an all-fiber interface, but due to early difficulties in creating a consumer-friendly fiber interconnect, nearly all implementations are copper systems. A notable exception, the Sony VAIO Z VPC-Z2, uses a nonstandard USB port with an optical component to connect to an outboard PCIe display adapter. Apple has been the primary driver of Thunderbolt adoption through 2011, though several other vendors[107] have announced new products and systems featuring Thunderbolt. Thunderbolt 3 forms the basis of the USB4 standard.

The Mobile PCIe specification (abbreviated to M-PCIe) allows PCI Express architecture to operate over the MIPI Alliance's M-PHY physical layer technology. Building on top of the already existing widespread adoption of M-PHY and its low-power design, Mobile PCIe lets mobile devices use PCI Express.[108]

Draft process

There are five primary releases (checkpoints) in a PCI-SIG specification:[109]

- Draft 0.3 (Concept): this release may have few details, but outlines the general approach and goals.
- Draft 0.5 (First draft): this release has a complete set of architectural requirements and must fully address the goals set out in the 0.3 draft.
- Draft 0.7 (Complete draft): this release must have a complete set of functional requirements and methods defined, and no new functionality may be added to the specification after this release. Before the release of this draft, electrical specifications must have been validated via test silicon.
- Draft 0.9 (Final draft): this release allows PCI-SIG member companies to perform an internal review for intellectual property, and no functional changes are permitted after this draft.
- 1.0 (Final release): this is the final and definitive specification, and any changes or enhancements are through errata documentation and engineering change notices (ECNs), respectively.

Historically, the earliest adopters of a new PCIe specification generally begin designing with the Draft 0.5, as they can confidently build up their application logic around the new bandwidth definition and often even start developing for any new protocol features. At the Draft 0.5 stage, however, there is still a strong likelihood of changes in the actual PCIe protocol layer implementation, so designers responsible for developing these blocks internally may be more hesitant to begin work than those using interface IP from external sources.
Hardware protocol summary

The PCIe link is built around dedicated unidirectional couples of serial (1-bit), point-to-point connections known as lanes. This is in sharp contrast to the earlier PCI connection, which is a bus-based system where all the devices share the same bidirectional, 32-bit or 64-bit, parallel bus.

PCI Express is a layered protocol, consisting of a transaction layer, a data link layer, and a physical layer. The data link layer is subdivided to include a media access control (MAC) sublayer. The physical layer is subdivided into logical and electrical sublayers. The physical logical sublayer contains a physical coding sublayer (PCS). The terms are borrowed from the IEEE 802 networking protocol model.

Physical layer

Connector pins and lengths:[110]

  Lanes   Pins (total)   Pins (variable)   Length (total)   Length (variable)
  x1      2×18 = 36      2×7  = 14         25 mm            7.65 mm
  x4      2×32 = 64      2×21 = 42         39 mm            21.65 mm
  x8      2×49 = 98      2×38 = 76         56 mm            38.65 mm
  x16     2×82 = 164     2×71 = 142        89 mm            71.65 mm

An open-end PCI Express x1 connector lets longer cards that use more lanes be plugged in while operating at x1 speeds.

The PCIe Physical Layer (PHY, PCIEPHY, PCI Express PHY, or PCIe PHY) specification is divided into two sub-layers, corresponding to electrical and logical specifications. The logical sublayer is sometimes further divided into a MAC sublayer and a PCS, although this division is not formally part of the PCIe specification. A specification published by Intel, the PHY Interface for PCI Express (PIPE),[111] defines the MAC/PCS functional partitioning and the interface between these two sub-layers. The PIPE specification also identifies the physical media attachment (PMA) layer, which includes the serializer/deserializer (SerDes) and other analog circuitry; however, since SerDes implementations vary greatly among ASIC vendors, PIPE does not specify an interface between the PCS and PMA.

At the electrical level, each lane consists of two unidirectional differential pairs operating at 2.5, 5, 8, 16, or 32 Gbit/s, depending on the negotiated capabilities. Transmit and receive are separate differential pairs, for a total of four data wires per lane.

A connection between any two PCIe devices is known as a link, and is built up from a collection of one or more lanes. All devices must minimally support a single-lane (x1) link. Devices may optionally support wider links composed of up to 32 lanes.[112][113] This allows for very good compatibility in two ways:

- A PCIe card physically fits (and works correctly) in any slot that is at least as large as it is (e.g., an x1-sized card works in any sized slot);
- A slot of a large physical size (e.g., x16) can be wired electrically with fewer lanes (e.g., x1, x4, x8, or x12), as long as it provides the ground connections required by the larger physical slot size.

In both cases, PCIe negotiates the highest mutually supported number of lanes; a simplified sketch of this negotiated outcome appears at the end of this section. Many graphics cards, motherboards, and BIOS versions are verified to support x1, x4, x8, and x16 connectivity on the same connection.

The width of a PCIe connector is 8.8 mm, while the height is 11.25 mm, and the length is variable. The fixed section of the connector is 11.65 mm in length and contains two rows of 11 pins each (22 pins total), while the length of the other section is variable, depending on the number of lanes. The pins are spaced at 1 mm intervals, and the thickness of the card going into the connector is 1.6 mm.[114][115]
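That negotiation can be pictured as picking the widest link width both partners support. The sketch below is a simplification: real link training is carried out by a hardware state machine (the LTSSM), and the card_widths/slot_widths inputs are hypothetical.

```python
# Simplified model of link-width negotiation: both partners end up at the
# widest width they have in common. Real training is done by the LTSSM.
def negotiated_width(card_widths: set[int], slot_widths: set[int]) -> int:
    common = card_widths & slot_widths
    if not common:
        raise ValueError("no common width; link training would fail")
    return max(common)

# An x16-capable card in a physically x16 slot wired with only 4 lanes:
print(negotiated_width({1, 4, 8, 16}, {1, 4}))  # -> 4 (the link runs as x4)
```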
Data transmission

PCIe sends all control messages, including interrupts, over the same links used for data. The serial protocol can never be blocked, so latency is still comparable to conventional PCI, which has dedicated interrupt lines. When the problem of IRQ sharing of pin-based interrupts is taken into account, and the fact that message-signaled interrupts (MSI) can bypass an I/O APIC and be delivered to the CPU directly, MSI performance ends up being substantially better.[116]

Data transmitted on multiple-lane links is interleaved, meaning that each successive byte is sent down successive lanes. The PCIe specification refers to this interleaving as data striping. While requiring significant hardware complexity to synchronize (or deskew) the incoming striped data, striping can significantly reduce the latency of the nth byte on a link. While the lanes are not tightly synchronized, there is a limit to the lane-to-lane skew of 20/8/6 ns for 2.5/5/8 GT/s, so the hardware buffers can re-align the striped data.[117] Due to padding requirements, striping may not necessarily reduce the latency of small data packets on a link.
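A toy illustration of this striping rule, with a hypothetical stripe() helper (real links also add framing, padding, and per-lane deskew buffers):

```python
# Byte striping: successive bytes of a packet go to successive lanes.
def stripe(data: bytes, lanes: int) -> list[bytes]:
    return [data[i::lanes] for i in range(lanes)]

packet = bytes(range(8))
for lane, stream in enumerate(stripe(packet, lanes=4)):
    print(f"lane {lane}: {stream.hex(' ')}")
# lane 0: 00 04
# lane 1: 01 05
# lane 2: 02 06
# lane 3: 03 07
```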
As with other high-data-rate serial transmission protocols, the clock is embedded in the signal. At the physical level, PCI Express 2.0 utilizes the 8b/10b encoding scheme[48] (line code) to ensure that strings of consecutive identical digits (zeros or ones) are limited in length. This coding was used to prevent the receiver from losing track of where the bit edges are. In this coding scheme, every eight (uncoded) payload bits of data are replaced with 10 (encoded) bits of transmit data, causing a 20% overhead in the electrical bandwidth. To improve the available bandwidth, PCI Express version 3.0 instead uses 128b/130b encoding (1.54% overhead). Line encoding limits the run length of identical-digit strings in data streams and ensures the receiver stays synchronised to the transmitter via clock recovery.

A desirable balance (and therefore spectral density) of 0 and 1 bits in the data stream is achieved by XORing a known binary polynomial as a "scrambler" to the data stream in a feedback topology. Because the scrambling polynomial is known, the data can be recovered by applying the XOR a second time. Both the scrambling and descrambling steps are carried out in hardware.
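The XOR-twice property is easy to demonstrate. In the sketch below, the 16-bit polynomial x^16 + x^5 + x^4 + x^3 + 1 is the scrambler polynomial used by PCIe 1.x/2.0, but the seeding and bit ordering are simplified, so this illustrates the principle rather than being bit-exact:

```python
def keystream(state: int = 0xFFFF):
    """16-bit Galois LFSR over x^16 + x^5 + x^4 + x^3 + 1, yielding one
    pseudo-random byte per step (seeding and bit order simplified)."""
    while True:
        byte = 0
        for _ in range(8):
            msb = (state >> 15) & 1
            byte = (byte << 1) | msb
            state = (state << 1) & 0xFFFF
            if msb:
                state ^= 0x0039  # taps for x^5 + x^4 + x^3 + 1
        yield byte

def scramble(data: bytes, seed: int = 0xFFFF) -> bytes:
    """XOR data with the LFSR keystream; applying it twice restores data."""
    return bytes(d ^ k for d, k in zip(data, keystream(seed)))

payload = b"\x00" * 8                 # a long run of zeros gets whitened
assert scramble(scramble(payload)) == payload
```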
Data link layer

The data link layer performs three vital services for the PCIe link:

- sequence the transaction layer packets (TLPs) that are generated by the transaction layer,
- ensure reliable delivery of TLPs between two endpoints via an acknowledgement protocol (ACK and NAK signaling) that explicitly requires replay of unacknowledged/bad TLPs,
- initialize and manage flow control credits.

On the transmit side, the data link layer generates an incrementing sequence number for each outgoing TLP. It serves as a unique identification tag for each transmitted TLP, and is inserted into the header of the outgoing TLP. A 32-bit cyclic redundancy check code (known in this context as Link CRC or LCRC) is also appended to the end of each outgoing TLP.

On the receive side, the received TLP's LCRC and sequence number are both validated in the link layer. If either the LCRC check fails (indicating a data error), or the sequence number is out of range (non-consecutive from the last valid received TLP), then the bad TLP, as well as any TLPs received after the bad TLP, are considered invalid and discarded. The receiver sends a negative acknowledgement message (NAK) with the sequence number of the invalid TLP, requesting re-transmission of all TLPs forward of that sequence number. If the received TLP passes the LCRC check and has the correct sequence number, it is treated as valid. The link receiver increments the sequence number (which tracks the last received good TLP), and forwards the valid TLP to the receiver's transaction layer. An ACK message is sent to the remote transmitter, indicating the TLP was successfully received (and, by extension, all TLPs with past sequence numbers).

If the transmitter receives a NAK message, or no acknowledgement (NAK or ACK) is received until a timeout period expires, the transmitter must retransmit all TLPs that lack a positive acknowledgement (ACK). Barring a persistent malfunction of the device or transmission medium, the link layer presents a reliable connection to the transaction layer, since the transmission protocol ensures delivery of TLPs over an unreliable medium.

In addition to sending and receiving TLPs generated by the transaction layer, the data link layer also generates and consumes data link layer packets (DLLPs). ACK and NAK signals are communicated via DLLPs, as are some power management messages and flow control credit information (on behalf of the transaction layer).

In practice, the number of in-flight, unacknowledged TLPs on the link is limited by two factors: the size of the transmitter's replay buffer (which must store a copy of all transmitted TLPs until the remote receiver ACKs them), and the flow control credits issued by the receiver to a transmitter. PCI Express requires all receivers to issue a minimum number of credits, to guarantee a link allows sending PCIConfig TLPs and message TLPs.
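A condensed sketch of the receive-side rule just described. The deliver, send_ack, and send_nak callbacks are hypothetical stand-ins for hardware blocks, and real implementations coalesce ACKs rather than sending one per TLP:

```python
# Receive side of the data link layer, reduced to the rule described above.
class LinkReceiver:
    SEQ_MOD = 1 << 12  # TLP sequence numbers are 12-bit, modular

    def __init__(self, deliver, send_ack, send_nak):
        self.expected = 0
        self.deliver = deliver    # hand a valid TLP up to the transaction layer
        self.send_ack = send_ack  # DLLP acknowledging this (and earlier) TLPs
        self.send_nak = send_nak  # DLLP requesting replay from a sequence number

    def on_tlp(self, seq: int, tlp: bytes, lcrc_ok: bool) -> None:
        if not lcrc_ok or seq != self.expected:
            # Bad or out-of-sequence TLP: discard it (and, implicitly,
            # everything after it) and ask for replay from `expected`.
            self.send_nak(self.expected)
            return
        self.deliver(tlp)
        self.expected = (self.expected + 1) % self.SEQ_MOD
        self.send_ack(seq)  # also acknowledges all earlier sequence numbers
```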
Transaction layer

PCI Express implements split transactions (transactions with request and response separated by time), allowing the link to carry other traffic while the target device gathers data for the response.

PCI Express uses credit-based flow control. In this scheme, a device advertises an initial amount of credit for each received buffer in its transaction layer. The device at the opposite end of the link, when sending transactions to this device, counts the number of credits each TLP consumes from its account. The sending device may only transmit a TLP when doing so does not make its consumed credit count exceed its credit limit. When the receiving device finishes processing the TLP from its buffer, it signals a return of credits to the sending device, which increases the credit limit by the restored amount. The credit counters are modular counters, and the comparison of consumed credits to credit limit requires modular arithmetic. The advantage of this scheme (compared to other methods such as wait states or handshake-based transfer protocols) is that the latency of credit return does not affect performance, provided that the credit limit is not encountered. This assumption is generally met if each device is designed with adequate buffer sizes.
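A minimal transmitter-side sketch of this credit gate, assuming a hypothetical 12-bit counter width (actual field widths in the specification vary by credit type):

```python
# Transmitter-side view of credit-based flow control, as described above.
# Counters are modular, so the limit check uses modular arithmetic.
MOD = 1 << 12  # illustrative counter width

class CreditGate:
    def __init__(self, advertised_credits: int):
        self.credit_limit = advertised_credits % MOD  # set by receiver's adverts
        self.consumed = 0                             # credits spent on TLPs

    def can_send(self, tlp_credits: int) -> bool:
        """True if sending keeps consumed credits within the limit (mod MOD)."""
        return ((self.credit_limit - (self.consumed + tlp_credits)) % MOD) <= MOD // 2

    def send(self, tlp_credits: int) -> None:
        assert self.can_send(tlp_credits), "must wait for a credit return"
        self.consumed = (self.consumed + tlp_credits) % MOD

    def credit_return(self, credits: int) -> None:
        """Receiver freed buffer space, raising our limit by that amount."""
        self.credit_limit = (self.credit_limit + credits) % MOD
```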
PCIe 1.x is often quoted to support a data rate of 250 MB/s in each direction, per lane. This figure is a calculation from the physical signaling rate (2.5 gigabaud) divided by the encoding overhead (10 bits per byte); this means a sixteen-lane (x16) PCIe card would then be theoretically capable of 16 × 250 MB/s = 4 GB/s in each direction. While this is correct in terms of data bytes, more meaningful calculations are based on the usable data payload rate, which depends on the profile of the traffic, which is a function of the high-level (software) application and intermediate protocol levels.

Like other high-data-rate serial interconnect systems, PCIe has a protocol and processing overhead due to the additional transfer robustness (CRC and acknowledgements). Long continuous unidirectional transfers (such as those typical in high-performance storage controllers) can approach >95% of PCIe's raw (lane) data rate. These transfers also benefit the most from an increased number of lanes (x2, x4, etc.). But in more typical applications (such as a USB or Ethernet controller), the traffic profile is characterized as short data packets with frequent enforced acknowledgements.[118] This type of traffic reduces the efficiency of the link, due to overhead from packet parsing and forced interrupts (either in the device's host interface or the PC's CPU). Being a protocol for devices connected to the same printed circuit board, it does not require the same tolerance for transmission errors as a protocol for communication over longer distances, and thus, this loss of efficiency is not particular to PCIe.

Efficiency of the link

As for any network-like communication link, some of the raw bandwidth is consumed by protocol overhead.[119] A PCIe 1.x lane, for example, offers a data rate on top of the physical layer of 250 MB/s (simplex). This is not the payload bandwidth but the physical-layer bandwidth; a PCIe lane has to carry additional information for full functionality.[119]

Gen 2 Transaction Layer Packet:[119]

  Field      Layer              Size (bytes)
  Start      PHY                1
  Sequence   Data link layer    2
  Header     Transaction layer  12 or 16
  Payload    Transaction layer  0 to 4096
  ECRC       Transaction layer  4 (optional)
  LCRC       Data link layer    4
  End        PHY                1

The Gen 2 overhead is then 20, 24, or 28 bytes per transaction.

Gen 3 Transaction Layer Packet:[119]

  Field      Layer              Size (bytes)
  Start      G3 PHY             4
  Sequence   Data link layer    2
  Header     Transaction layer  12 or 16
  Payload    Transaction layer  0 to 4096
  ECRC       Transaction layer  4 (optional)
  LCRC       Data link layer    4

The Gen 3 overhead is then 22, 26, or 30 bytes per transaction.

The packet efficiency,

$$\text{Packet Efficiency} = \frac{\text{Payload}}{\text{Payload} + \text{Overhead}},$$

is 86% for a 128-byte payload, and 98% for a 1024-byte payload. For small accesses like register settings (4 bytes), the efficiency drops as low as 16%.

The maximum payload size (MPS) is set on all devices based on the smallest maximum on any device in the chain. If one device has an MPS of 128 bytes, all devices of the tree must set their MPS to 128 bytes. In this case, the bus will have a peak efficiency of 86% for writes.[119]
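These percentages follow from the efficiency formula with the minimum Gen 2 overhead of 20 bytes (start + sequence + 12-byte header + LCRC + end); a small sketch reproducing the quoted figures:

```python
# Gen 2 packet efficiency with the minimum framing shown above:
# start (1) + sequence (2) + 12-byte header + LCRC (4) + end (1) = 20 bytes.
GEN2_MIN_OVERHEAD = 20

def packet_efficiency(payload: int, overhead: int = GEN2_MIN_OVERHEAD) -> float:
    return payload / (payload + overhead)

for payload in (4, 128, 1024):
    print(f"{payload:4d}-byte payload: {packet_efficiency(payload):.1%}")
#    4-byte payload: 16.7%
#  128-byte payload: 86.5%
# 1024-byte payload: 98.1%
```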
Applications

Asus Nvidia GeForce GTX 650 Ti, a PCI Express 3.0 x16 graphics card
The Nvidia GeForce GTX 1070, a PCI Express 3.0 x16 graphics card
Intel 82574L Gigabit Ethernet NIC, a PCI Express x1 card
A Marvell-based SATA 3.0 controller, as a PCI Express x1 card

PCI Express operates in consumer, server, and industrial applications, as a motherboard-level interconnect (to link motherboard-mounted peripherals), a passive backplane interconnect, and as an expansion card interface for add-in boards.

In virtually all modern (as of 2012) PCs, from consumer laptops and desktops to enterprise data servers, the PCIe bus serves as the primary motherboard-level interconnect, connecting the host system processor with both integrated peripherals (surface-mounted ICs) and add-on peripherals (expansion cards). In most of these systems, the PCIe bus co-exists with one or more legacy PCI buses, for backward compatibility with the large body of legacy PCI peripherals.

As of 2013, PCI Express has replaced AGP as the default interface for graphics cards on new systems. Almost all models of graphics cards released since 2010 by AMD (ATI) and Nvidia use PCI Express. Nvidia uses the high-bandwidth data transfer of PCIe for its Scalable Link Interface (SLI) technology, which allows multiple graphics cards of the same chipset and model number to run in tandem, allowing increased performance. AMD has also developed a multi-GPU system based on PCIe, called CrossFire. AMD, Nvidia, and Intel have released motherboard chipsets that support as many as four PCIe x16 slots, allowing tri-GPU and quad-GPU card configurations.

External GPUs

Theoretically, external PCIe could give a notebook the graphics power of a desktop, by connecting a notebook with any PCIe desktop video card (enclosed in its own external housing, with a power supply and cooling); this is possible with an ExpressCard or Thunderbolt interface. An ExpressCard interface provides bit rates of 5 Gbit/s (0.5 GB/s throughput), whereas a Thunderbolt interface provides bit rates of up to 40 Gbit/s (5 GB/s throughput).

In 2006, Nvidia developed the Quadro Plex external PCIe family of GPUs that can be used for advanced graphic applications for the professional market.[120] These video cards require a PCI Express x8 or x16 slot for the host-side card, which connects to the Plex via a VHDCI carrying eight PCIe lanes.[121]

In 2008, AMD announced the ATI XGP technology, based on a proprietary cabling system that is compatible with PCIe x8 signal transmissions.[122] This connector is available on the Fujitsu Amilo and the Acer Ferrari One notebooks. Fujitsu launched their AMILO GraphicBooster enclosure for XGP soon thereafter.[123] Around 2010, Acer launched the Dynavivid graphics dock for XGP.[124]

In 2010, external card hubs were introduced that can connect to a laptop or desktop through a PCI ExpressCard slot. These hubs can accept full-sized graphics cards. Examples include MSI GUS,[125] Village Instrument's ViDock,[126] the Asus XG Station, the Bplus PE4H V3.2 adapter,[127] as well as more improvised DIY devices.[128] However, such solutions are limited by the size (often only x1) and version of the available PCIe slot on a laptop.

The Intel Thunderbolt interface has provided a new option to connect with a PCIe card externally. Magma has released the ExpressBox 3T, which can hold up to three PCIe cards (two at x8 and one at x4).[129] MSI also released the Thunderbolt GUS II, a PCIe chassis dedicated for video cards.[130] Other products, such as Sonnet's Echo Express[131] and mLogic's mLink, are Thunderbolt PCIe chassis in a smaller form factor.[132]

In 2017, more fully featured external card hubs were introduced, such as the Razer Core, which has a full-length PCIe x16 interface.[133]

Storage devices

An OCZ RevoDrive SSD, a full-height x4 PCI Express card

See also: SATA Express and NVMe

The PCI Express protocol can be used as a data interface to flash memory devices, such as memory cards and solid-state drives (SSDs).

The XQD card is a memory card format utilizing PCI Express, developed by the CompactFlash Association, with transfer rates of up to 1 GB/s.[134]

Many high-performance, enterprise-class SSDs are designed as PCI Express RAID controller cards. Before NVMe was standardized, many of these cards utilized proprietary interfaces and custom drivers to communicate with the operating system; they had much higher transfer rates (over 1 GB/s) and IOPS (over one million I/O operations per second) when compared to Serial ATA or SAS drives.[135][136] For example, in 2011, OCZ and Marvell co-developed a native PCI Express solid-state drive controller for a PCI Express 3.0 x16 slot, with a maximum capacity of 12 TB and a performance of up to 7.2 GB/s sequential transfers and up to 2.52 million IOPS in random transfers.[137]

SATA Express was an interface for connecting SSDs through SATA-compatible ports, optionally providing multiple PCI Express lanes as a pure PCI Express connection to the attached storage device.[138] M.2 is a specification for internally mounted computer expansion cards and associated connectors, which also uses multiple PCI Express lanes.[139]

PCI Express storage devices can implement both the AHCI logical interface, for backward compatibility, and the NVM Express logical interface, for much faster I/O operations provided by utilizing the internal parallelism offered by such devices. Enterprise-class SSDs can also implement SCSI over PCI Express.[140]

Cluster interconnect

Certain data center applications (such as large computer clusters) require the use of fiber-optic interconnects due to the distance limitations inherent in copper cabling. Typically, a network-oriented standard such as Ethernet or Fibre Channel suffices for these applications, but in some cases the overhead introduced by routable protocols is undesirable, and a lower-level interconnect, such as InfiniBand, RapidIO, or NUMAlink, is needed. Local-bus standards such as PCIe and HyperTransport can in principle be used for this purpose,[141] but as of 2015, solutions are only available from niche vendors such as Dolphin ICS and TTTech Auto.

Competing protocols

Other communications standards based on high-bandwidth serial architectures include InfiniBand, RapidIO, HyperTransport, Intel QuickPath Interconnect, and the Mobile Industry Processor Interface (MIPI). The differences are based on the trade-offs between flexibility and extensibility vs. latency and overhead. For example, making the system hot-pluggable, as with InfiniBand but not PCI Express, requires that software track network topology changes.

Another example is making the packets shorter to decrease latency (as is required if a bus must operate as a memory interface). Smaller packets mean packet headers consume a higher percentage of the packet, thus decreasing the effective bandwidth. Examples of bus protocols designed for this purpose are RapidIO and HyperTransport.

PCI Express falls somewhere in the middle, targeted by design as a system interconnect (local bus) rather than a device interconnect or routed network protocol. Additionally, its design goal of software transparency constrains the protocol and raises its latency somewhat.

Delays in PCIe 4.0 implementations led to the Gen-Z consortium, the CCIX effort, and an open Coherent Accelerator Processor Interface (CAPI) all being announced by the end of 2016.[142]

On 11 March 2019, Intel presented Compute Express Link (CXL), a new interconnect bus based on the PCI Express 5.0 physical layer infrastructure. The initial promoters of the CXL specification included Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei, Intel, and Microsoft.[143]

Integrators list

The PCI-SIG Integrators List lists products made by PCI-SIG member companies that have passed compliance testing. The list includes switches, bridges, NICs, SSDs, etc.[144]

See also

- Active State Power Management (ASPM)
- Peripheral Component Interconnect (PCI)
- PCI configuration space
- PCI-X
- PCI/104-Express
- PCIe/104
- Root complex
- Serial Digital Video Out (SDVO)
- List of device bit rates (main buses)
- UCIe

Notes

a. Switches can create multiple endpoints out of one, to allow sharing it with multiple devices.
b. The card's Serial ATA power connector is present because the USB 3.0 ports require more power than the PCI Express bus can supply. More often, a 4-pin Molex power connector is used.

References
- Mayhew, D.; Krishnan, V. (August 2003). "PCI Express and Advanced Switching: Evolutionary path to building next generation interconnects". 11th Symposium on High Performance Interconnects, 2003. Proceedings. pp. 21–29. doi:10.1109/CONECT.2003.1231473. ISBN 0-7695-2012-X. S2CID 7456382.
- "Definition of PCI Express". PCMag.
- Zhang, Yanmin; Nguyen, T. Long (June 2007). "Enable PCI Express Advanced Error Reporting in the Kernel" (PDF). Proceedings of the Linux Symposium. Fedora project. Archived from the original (PDF) on 10 March 2016. Retrieved 8 May 2012.
- "Flash Memory Form Factors: The Fundamentals of Reliable Flash Storage". hyperstone.com. Retrieved 19 April 2018.
- Budruk, Ravi (21 August 2007). "PCI Express Basics" (PDF). PCI-SIG. Archived from the original (PDF) on 15 July 2014. Retrieved 15 July 2014.
- "What are PCIe Slots and Their Uses". PC Guide 101. 18 May 2021. Retrieved 21 June 2021.
- "How PCI Express Works". HowStuffWorks. 17 August 2005. Archived from the original on 3 December 2009. Retrieved 7 December 2009.
- "4.2.4.9 Link Width and Lane Sequence Negotiation". PCI Express Base Specification, Revision 2.1. 4 March 2009.
- "PCI Express Architecture Frequently Asked Questions". PCI-SIG. Archived from the original on 13 November 2008. Retrieved 23 November 2008.
- "PCI Express Bus". Interface bus. Archived from the original on 8 December 2007. Retrieved 12 June 2010.
- 32 lanes are defined by the PCIe Base Specification up to PCIe 5.0, but there is no card standard in the PCIe Card Electromechanical Specification, and that lane number was never implemented.
- "PCI Express: An Overview of the PCI Express Standard". Developer Zone. National Instruments. 13 August 2009. Archived from the original on 5 January 2010. Retrieved 7 December 2009.
- Qazi, Atif. "What are PCIe Slots". PC Gear Lab. Retrieved 8 April 2020.
- "New PCIe Form Factor Enables Greater PCIe SSD Adoption". NVM Express. 12 June 2012. Archived from the original on 6 September 2015.
- "Memblaze PBlaze4 AIC NVMe SSD Review". StorageReview. 21 December 2015.
- Fulton, Kane (20 July 2015). "19 graphics cards that shaped the future of gaming". TechRadar.
- Leadbetter, Richard (16 September 2020). "Nvidia GeForce RTX 3080 review: welcome to the next level". Eurogamer.
- "Sapphire Radeon RX 5700 XT Pulse Review". bit-tech.net. Retrieved 26 August 2019.
- "AMD Radeon RX 5700 XT 8GB GDDR6 THICC II (RX-57XT8DFD6)". xfxforce.com. Retrieved 25 August 2019.
- "ROG Strix GeForce RTX 3080 OC Edition 10GB GDDR6X Graphics Cards". rog.asus.com.
- "What is the A side, B side configuration of PCI cards". Frequently Asked Questions. Adex Electronics. 1998. Archived from the original on 2 November 2011. Retrieved 24 October 2011.
- PCI Express Card Electromechanical Specification, Revision 2.0.
- PCI Express Card Electromechanical Specification, Revision 4.0, Version 1.0.
- "L1 PM Substates with CLKREQ, Revision 1.0a" (PDF). PCI-SIG. Retrieved 8 November 2018.
- "Emergency Power Reduction Mechanism with PWRBRK Signal ECN" (PDF). PCI-SIG. Archived from the original (PDF) on 9 November 2018. Retrieved 8 November 2018.
- PCI Express Card Electromechanical Specification, Revision 1.1.
- Schoenborn, Zale (2004). "Board Design Guidelines for PCI Express Architecture" (PDF). PCI-SIG. pp. 19–21. Archived (PDF) from the original on 27 March 2016.
- PCI Express Base Specification, Revision 1.1, page 332.
- "Where Does PCIe Cable Go?". 16 January 2022. Retrieved 10 June 2022.
- "Mini-Fit PCI Express Wire-to-Board Connector System" (PDF). Retrieved 4 December 2020.
- PCI Express x16 Graphics 150W-ATX Specification, Revision 1.0.
- PCI Express 225 W/300 W High Power Card Electromechanical Specification, Revision 1.0.
- PCI Express Card Electromechanical Specification, Revision 3.0.
- Yun Ling (16 May 2008). "PCIe Electromechanical Updates". Archived from the original on 5 November 2015. Retrieved 7 November 2015.
- "MP1: Mini PCI Express / PCI Express Adapter". hwtools.net. 18 July 2014. Archived from the original on 3 October 2014. Retrieved 28 September 2014.
- "mSATA FAQ: A Basic Primer". NotebookReview. Archived from the original on 12 February 2012.
- "Eee PC Research". ivc wiki. Archived from the original on 30 March 2010. Retrieved 26 October 2009.
- "Desktop Board: Solid-state drive (SSD) compatibility". Intel. Archived from the original on 2 January 2016.
- "How to distinguish the differences between M.2 cards". Dell US. dell.com. Retrieved 24 March 2020.
- "PCI Express External Cabling 1.0 Specification". Archived from the original on 10 February 2007. Retrieved 9 February 2007.
- "PCI Express External Cabling Specification Completed by PCI-SIG". PCI-SIG. 7 February 2007. Archived from the original on 26 November 2013. Retrieved 7 December 2012.
- "OCuLink connectors and cables support new PCIe standard". connectortips.com. Archived from the original on 13 March 2017.
- Mokosiy, Vitaliy (9 October 2020). "Untangling terms: M.2, NVMe, USB-C, SAS, PCIe, U.2, OCuLink". Medium. Retrieved 26 March 2021.
- "Supermicro Universal I/O (UIO) Solutions". Supermicro.com. Archived from the original on 24 March 2014. Retrieved 24 March 2014.
- "Get ready for M-PCIe testing". PC board design. EDN.
- "PCI SIG discusses M-PCIe, OCuLink & 4th gen PCIe". The Register. UK. 13 September 2013. Archived from the original on 29 June 2017.
- "PCI Express 4.0 Frequently Asked Questions". pcisig.com. PCI-SIG. Archived from the original on 18 May 2014. Retrieved 18 May 2014.
- "PCI Express 3.0 Frequently Asked Questions". pcisig.com. PCI-SIG. Archived from the original on 1 February 2014. Retrieved 1 May 2014.
- "What does GT/s mean, anyway?". TM World. Archived from the original on 14 August 2012. Retrieved 7 December 2012.
- "Deliverable 12.2". SE: Eiscat. Archived from the original on 17 August 2010. Retrieved 7 December 2012.
- PCI-SIG. Archived from the original on 6 July 2008.
- "PCI Express Base 2.0 specification announced" (PDF) (press release). PCI-SIG. 15 January 2007. Archived from the original (PDF) on 4 March 2007. Retrieved 9 February 2007. Note that in this press release the term "aggregate bandwidth" refers to the sum of incoming and outgoing bandwidth; using this terminology, the aggregate bandwidth of full-duplex 100BASE-TX is 200 Mbit/s.
- Smith, Tony (11 October 2006). "PCI Express 2.0 final draft spec published". The Register. Archived from the original on 29 January 2007. Retrieved 9 February 2007.
- Key, Gary; Fink, Wesley (21 May 2007). "Intel P35: Intel's Mainstream Chipset Grows Up". AnandTech. Archived from the original on 23 May 2007. Retrieved 21 May 2007.
- Huynh, Anh (8 February 2007). "NVIDIA MCP72 Details Unveiled". AnandTech. Archived from the original on 10 February 2007. Retrieved 9 February 2007.
- "Intel P35 Express Chipset Product Brief" (PDF). Intel. Archived (PDF) from the original on 26 September 2007. Retrieved 5 September 2007.
- Hachman, Mark (5 August 2009). "PCI Express 3.0 Spec Pushed Out to 2010". PC Mag. Archived from the original on 7 January 2014. Retrieved 7 December 2012.
- "PCI Express 3.0 Bandwidth: 8.0 Gigatransfers/s". ExtremeTech. 9 August 2007. Archived from the original on 24 October 2007. Retrieved 5 September 2007.
- "PCI Special Interest Group Publishes PCI Express 3.0 Standard". X-bit labs. 18 November 2010. Archived from the original on 21 November 2010. Retrieved 18 November 2010.
- "PCIe 3.1 and 4.0 Specifications Revealed". eteknix.com. July 2013. Archived from the original on 1 February 2016.
- "Trick or Treat… PCI Express 3.1 Released!". synopsys.com. Archived from the original on 23 March 2015.
- "PCI Express 4.0 evolution to 16 GT/s, twice the throughput of PCI Express 3.0 technology" (press release). PCI-SIG. 29 November 2011. Archived from the original on 23 December 2012. Retrieved 7 December 2012.
- "Frequently Asked Questions". PCI-SIG. pcisig.com. Archived from the original on 20 October 2016.
- "PCIe 4.0 Heads to Fab, 5.0 to Lab". EE Times. 26 June 2016. Archived from the original on 28 August 2016. Retrieved 27 August 2016.
- "Mellanox Announces ConnectX-5, the Next Generation of 100G InfiniBand and Ethernet Smart Interconnect Adapter". Mellanox (NVIDIA). mellanox.com.
- "Mellanox Announces 200Gb/s HDR InfiniBand Solutions Enabling Record Levels of Performance and Scalability". Mellanox (NVIDIA). mellanox.com.
- "IDF: PCIe 4.0 läuft, PCIe 5.0 in Arbeit" [IDF: PCIe 4.0 is running, PCIe 5.0 in the works]. Heise Online (in German). 18 August 2016. Archived from the original on 19 August 2016. Retrieved 18 August 2016.
- Thompto, Brian. "POWER9: Processor for the Cognitive Era". 2016 IEEE Hot Chips 28 Symposium (HCS), 21–23 August 2016.
- Born, Eric (8 June 2017). "PCIe 4.0 specification finally out with 16 GT/s on tap". Tech Report. Archived from the original on 8 June 2017. Retrieved 8 June 2017.
- "IBM Unveils Most Advanced Server for AI". www-03.ibm.com. 5 December 2017.
- "IBM Power System AC922 (8335-GTG) server helps you to harness breakthrough accelerated AI, HPDA, and HPC performance for faster time to insight". IBM Europe Hardware Announcement ZG17-0147.
- "NETINT Introduces Codensity with Support for PCIe 4.0". NETINT Technologies. 17 July 2018. Retrieved 28 September 2018.
- Mujtaba, Hassan (9 January 2019). "AMD Ryzen 3000 Series CPUs Based on Zen 2 Launching in Mid of 2019".
- Alcorn, Paul (3 June 2019). "AMD Nixes PCIe 4.0 Support on Older Socket AM4 Motherboards, Here's Why". Tom's Hardware. Archived from the original on 10 June 2019. Retrieved 10 June 2019.
- Alcorn, Paul (10 January 2019). "PCIe 4.0 May Come to all AMD Socket AM4 Motherboards (Updated)". Tom's Hardware. Archived from the original on 10 June 2019. Retrieved 10 June 2019.
- Cutress, Ian (13 August 2020). "Tiger Lake IO and Power". AnandTech.
- "It's Official: PCIe 5.0 is Announced". synopsys.com. Retrieved 7 June 2017.
- "PLDA Announces Availability of XpressRICH5 PCIe 5.0 Controller IP". plda.com. Retrieved 28 June 2018.
- "XpressRICH5 for ASIC". plda.com. Retrieved 28 June 2018.
- "Doubling Bandwidth in Under Two Years: PCI Express Base Specification, Revision 5.0, Version 0.9 is Now Available to Members". pcisig.com. Retrieved 12 December 2018.
- "PCIe 5.0 Is Ready For Prime Time". tomshardware.com. 17 January 2019. Retrieved 18 January 2019.
- "PCI-SIG Achieves 32GT/s with New PCI Express 5.0 Specification". businesswire.com. 29 May 2019.
- "PCI Express 5.0: China stellt ersten Controller vor" [PCI Express 5.0: China presents the first controller]. PC Games Hardware (in German). 18 November 2019.
- "IBM's POWER10 Processor". Hot Chips 32, 16–18 August 2020.
- "Power E1080 Enterprise server delivers a uniquely architected platform to help securely and efficiently scale core operational and AI applications in a hybrid cloud". IBM Europe Hardware Announcement ZG21-0059.
- IBM Power E1080 Technical Overview and Introduction.
- "Intel Unveils 12th Gen Intel Core, Launches World's Best Gaming". Intel.com. Retrieved 16 February 2022.
- "NVIDIA Announces Hopper Architecture, the Next Generation of Accelerated Computing".
- "AMD Showcases Industry-Leading Gaming, Commercial, and Mainstream PC Technologies at COMPUTEX 2022". AMD.com. Retrieved 23 May 2022.
- "4th Gen AMD EPYC Processor Architecture". AMD.com. Retrieved 12 November 2022.
- "PCI-SIG Announces Upcoming PCI Express 6.0 Specification to Reach 64 GT/s". businesswire.com. 18 June 2019.
- Smith, Ryan. "PCI Express Bandwidth to Be Doubled Again: PCIe 6.0 Announced, Spec to Land in 2021". anandtech.com.
- "PCI Express 6.0 Reaches Version 0.5 Ahead Of Finalization Next Year". Phoronix. phoronix.com.
- Shilov, Anton (4 November 2020). "PCIe 6.0 Specification Hits Milestone: Complete Draft Is Ready". Tom's Hardware.
- Yanes, Al. "PCIe 6.0 Specification, Version 0.9: One Step Closer to Final Release". PCI-SIG. pcisig.com. Retrieved 6 October 2021.
- "PCI-SIG Releases PCIe 6.0 Specification Delivering Record Performance to Power Big Data Applications". Business Wire. 11 January 2022. Retrieved 16 February 2022.
- "The Evolution of the PCI Express Specification: On its Sixth Generation, Third Decade and Still Going Strong". PCI-SIG. 11 January 2022. Retrieved 16 February 2022.
- Das Sharma, Debendra. "PCIe 6.0 Specification: The Interconnect for I/O Needs of the Future". PCI-SIG. p. 8. Archived from the original on 30 October 2021.
- "Pushing the Envelope with PCIe 6.0: Bringing PAM4 to PCIe" (PDF). Retrieved 16 February 2022.
- "PowerPoint Presentation" (PDF). Retrieved 16 February 2022.
- "PCI-SIG Announces PCI Express 7.0 Specification to Reach 128 GT/s". Business Wire. 21 June 2022. Retrieved 25 June 2022.
- "PLX demo shows PCIe over fiber as data center clustering interconnect". Cabling Install. PennWell. Retrieved 29 August 2012.
- "Introduced second generation PCI Express Gen 2 over fiber optic systems". Adnaco. 22 April 2011. Archived from the original on 4 October 2012. Retrieved 29 August 2012.
- "PCIe Active Optical Cable System". Archived from the original on 30 December 2014. Retrieved 23 October 2015.
- IBM Power Systems E870 and E880 Technical Overview and Introduction.
- "Acer, Asus to Bring Intel's Thunderbolt Speed Technology to Windows PCs". PC World. 14 September 2011. Archived from the original on 18 January 2012. Retrieved 7 December 2012.
- Parrish, Kevin (28 June 2013). "PCIe for Mobile Launched; PCIe 3.1, 4.0 Specs Revealed". Tom's Hardware. Retrieved 10 July 2014.
- "PCI Express 4.0 Draft 0.7 & PIPE 4.4 Specifications: What Do They Mean to Designers?" (Synopsys technical article). ChipEstimate.com. Retrieved 28 June 2018.
- "PCI Express 1x, 4x, 8x, 16x bus pinout and wiring". RU: Pinouts. Archived from the original on 25 November 2009. Retrieved 7 December 2009.
- "PHY Interface for the PCI Express Architecture" (PDF) (version 2.00 ed.). Intel. Archived from the original (PDF) on 17 March 2008. Retrieved 21 May 2008.
- PCI Express System Architecture.
- "PCI Express Architecture". intel.com.
- "Mechanical Drawing for PCI Express Connector". Interface bus. Retrieved 7 December 2007.
- "FCi schematic for PCIe connectors" (PDF). FCI connect. Retrieved 7 December 2007.
- "Reducing Interrupt Latency Through the Use of Message Signaled Interrupts".
- PCI Express Base Specification, Revision 3.0, Table 4-24.
- "Computer Peripherals And Interfaces". Technical Publications Pune. Archived from the original on 25 February 2014. Retrieved 23 July 2009.
- Lawley, Jason (28 October 2014). "Understanding Performance of PCI Express Systems" (PDF), version 1.2. Xilinx.
- "NVIDIA Introduces NVIDIA Quadro Plex: A Quantum Leap in Visual Computing". Nvidia. 1 August 2006. Archived from the original on 24 August 2006. Retrieved 14 July 2018.
- "Quadro Plex VCS: Advanced visualization and remote graphics". Nvidia. Archived from the original on 28 April 2011. Retrieved 11 September 2010.
- "XGP". ATI. AMD. Archived from the original on 29 January 2010. Retrieved 11 September 2010.
- "Fujitsu-Siemens Amilo GraphicBooster External Laptop GPU Released". 3 December 2008. Archived from the original on 16 October 2015. Retrieved 9 August 2015.
- "DynaVivid Graphics Dock from Acer arrives in France, what about the US?". 11 August 2010. Archived from the original on 16 October 2015. Retrieved 9 August 2015.
- Dougherty, Steve (22 May 2010). "MSI to showcase GUS external graphics solution for laptops at Computex". TweakTown.
- Hellstrom, Jerry (9 August 2011). "ExpressCard trying to pull a (not so) fast one?". PC Perspective (editorial). Archived from the original on 1 February 2016.
- "PE4H V3.2 (PCIe x16 Adapter)". Hwtools.net. Archived from the original on 14 February 2014. Retrieved 5 February 2014.
- O'Brien, Kevin (8 September 2010). "How to Upgrade Your Notebook Graphics Card Using DIY ViDOCK". NotebookReview. Archived from the original on 13 December 2013.
- Lal Shimpi, Anand (7 September 2011). "The Thunderbolt Devices Trickle In: Magma's ExpressBox 3T". AnandTech. Archived from the original on 4 March 2016.
- "MSI GUS II external GPU enclosure with Thunderbolt". The Verge (hands-on). 10 January 2012. Archived from the original on 13 February 2012. Retrieved 12 February 2012.
- "PCI express graphics, Thunderbolt". Tom's Hardware. 17 September 2012.
- "M logics M link Thunderbold chassis no shipping". Engadget. 13 December 2012. Archived from the original on 25 June 2017.
- Burns, Chris (17 October 2017). "2017 Razer Blade Stealth and Core V2 detailed". SlashGear. Archived from the original on 17 October 2017.
- "CompactFlash Association readies next-gen XQD format, promises write speeds of 125 MB/s and up". Engadget. 8 December 2011. Archived from the original on 19 May 2014. Retrieved 18 May 2014.
- Kerekes, Zsolt (December 2011). "What's so very different about the design of Fusion-io's ioDrives / PCIe SSDs?". storagesearch.com. Archived from the original on 23 September 2013. Retrieved 2 October 2013.
- "Fusion-io ioDrive Duo Enterprise PCIe Review". storagereview.com. 16 July 2012. Archived from the original on 4 October 2013. Retrieved 2 October 2013.
- "OCZ Demos 4 TiB, 16 TiB Solid-State Drives for Enterprise". X-bit labs. Archived from the original on 25 March 2013. Retrieved 7 December 2012.
- "Enabling Higher Speed Storage Applications with SATA Express". SATA-IO. Archived from the original on 27 November 2012. Retrieved 7 December 2012.
- "SATA M.2 Card". SATA-IO. Archived from the original on 3 October 2013. Retrieved 14 September 2013.
- "SCSI Express". SCSI Trade Association. Archived from the original on 27 January 2013. Retrieved 27 December 2012.
- Meduri, Vijay (24 January 2011). "A Case for PCI Express as a High-Performance Cluster Interconnect". HPCwire. Archived from the original on 14 January 2013. Retrieved 7 December 2012.
- Koblentz, Evan (3 February 2017). "New PCI Express 4.0 delay may empower next-gen alternatives". TechRepublic. Archived from the original on 1 April 2017. Retrieved 31 March 2017.
- Cutress, Ian. "CXL Specification 1.0 Released: New Industry High-Speed Interconnect From Intel". anandtech.com. Retrieved 9 August 2019.
- "Integrators List". PCI-SIG. pcisig.com. Retrieved 27 March 2019.

Further reading

- Budruk, Ravi; Anderson, Don; Shanley, Tom (2003). Winkles, Joseph "Joe" (ed.). PCI Express System Architecture. MindShare PC system architecture. Addison-Wesley. ISBN 978-0-321-15630-3. (1120 pp.)
- Solari, Edward; Congdon, Brad (2003). Complete PCI Express Reference: Design Implications for Hardware and Software Developers. Intel. ISBN 978-0-9717861-9-6. (1056 pp.)
- Wilen, Adam; Schade, Justin P; Thornburg, Ron (April 2003). Introduction to PCI Express: A Hardware and Software Developer's Guide. Intel. ISBN 978-0-9702846-9-3. (325 pp.)

External links

- Media related to PCIe at Wikimedia Commons
- PCI-SIG Specifications