Radeon X1000 series


ATI Radeon X1000 series
Release date: October 5, 2005
Codename: Fudo (R520), Rodin (R580)
Architecture: Radeon R500
Transistors:
  • 107M 90nm (RV505)
  • 107M 90nm (RV515)
  • 105M 90nm (RV516)
  • 157M 90nm (RV530)
  • 312M 90nm (R520)
  • 384M 90nm (R580)
  • 384M 90nm (R580+)
  • 157M 80nm (RV535)
  • 312M 80nm (RV560)
  • 312M 80nm (RV570)
Cards
  Entry-level: X1300, X1550
  Mid-range: X1600, X1650
  High-end: X1800, X1900
  Enthusiast: X1950
API support
  DirectX: Direct3D 9.0c, Shader Model 3.0
  OpenGL: OpenGL 2.0
History
  Predecessor: Radeon X800 series
  Successor: Radeon HD 2000 series
Support status: Unsupported

The R520 (codenamed Fudo) is a graphics processing unit (GPU) developed by ATI Technologies and produced by TSMC. It was the first GPU produced using a 90 nm photolithography process.

The R520 is the foundation of the X1000 series, a line of DirectX 9.0c and OpenGL 2.0 3D accelerator video cards. It is ATI's first major architectural overhaul since the R300 and is highly optimized for Shader Model 3.0. The Radeon X1000 series was introduced on October 5, 2005, and competed primarily against Nvidia's GeForce 7 series. ATI released the R500 series' successor, the R600 series, on May 14, 2007.

ATI provides no official support for X1000 series cards on Windows 8 or Windows 10; the last AMD Catalyst release for this generation, version 10.2 from 2010, supports operating systems up to Windows 7.[1] AMD stopped providing Windows 7 drivers for this series in 2015.[2]

Open-source Radeon drivers are available for Linux distributions.

The same GPUs are also found in some AMD FireMV products targeting multi-monitor set-ups.

Delay during development

The Radeon X1800 video cards, which included an R520, were released several months late because ATI engineers discovered a bug in the GPU at a very late stage of development. The bug, caused by a faulty third-party 90 nm chip design library, greatly hampered clock speed ramping, so the chip had to be "respun" for another revision (a new GDSII had to be sent to TSMC). The problem was almost random in how it affected the prototype chips, making it difficult to identify.

Architecture

The R520 architecture is referred to by ATI as an "Ultra Threaded Dispatch Processor", reflecting ATI's plan to boost the efficiency of the GPU rather than rely on a brute-force increase in the number of processing units. A central pixel shader "dispatch unit" breaks shaders down into threads (batches) of 16 pixels (4×4) and can track and distribute up to 128 threads per pixel "quad" (4 pipelines each). When a shader quad becomes idle, because it has completed a task or is waiting for other data, the dispatch engine assigns the quad another task in the meantime. The overall result is, in theory, greater utilization of the shader units. To support the large number of threads per quad, ATI created a very large processor register array that is capable of multiple concurrent reads and writes and has a high-bandwidth connection to each shader array, providing the temporary storage necessary to keep the pipelines fed with available work as much as possible. With chips such as the RV530 and R580, where the number of shader units per pipeline triples, the efficiency of pixel shading drops off slightly because these shaders still have the same level of threading resources as the less endowed RV515 and R520.[3]
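
The scheduling principle can be illustrated with a small sketch. This is a simplified, hypothetical model: the 16-pixel batches and 128-thread limit follow the figures above, but the texture-fetch probability, the latency, and the scheduler itself are assumptions for illustration, not ATI's hardware logic.

# Simplified, hypothetical model of latency hiding with many threads per shader quad.
# 16-pixel batches and the 128-thread limit follow the R520 description above; the
# texture-fetch probability and latency are arbitrary, and the scheduler is illustrative.
from collections import deque
import random

THREADS_PER_QUAD = 128    # maximum 16-pixel batches tracked per quad
TEXTURE_LATENCY = 20      # assumed stall, in cycles, for a texture/memory fetch

def run_quad(num_threads, cycles):
    """Return the fraction of cycles in which the quad's ALUs did useful work."""
    ready = deque(range(num_threads))   # threads ready to issue an instruction
    stalled = {}                        # thread id -> cycle when its fetch completes
    busy = 0
    for cycle in range(cycles):
        for tid, done in list(stalled.items()):   # wake threads whose fetch finished
            if done <= cycle:
                ready.append(tid)
                del stalled[tid]
        if ready:
            tid = ready.popleft()
            busy += 1                             # one ALU instruction issued
            if random.random() < 0.25:            # instruction needs a texture fetch
                stalled[tid] = cycle + TEXTURE_LATENCY
            else:
                ready.append(tid)
        # otherwise the quad idles this cycle: nothing is ready to run
    return busy / cycles

random.seed(0)
for n in (1, 4, 32, THREADS_PER_QUAD):
    print(f"{n:3d} threads in flight -> ~{run_quad(n, 10_000):.0%} ALU utilization")

With few threads the quad idles whenever a fetch is outstanding; with many threads there is almost always other work to issue, which is the effect the dispatch processor aims for.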

The next major change to the core is its memory bus. The R420 and R300 had nearly identical memory controller designs, the former being a bug-fixed release designed for higher clock speeds. The R520's memory bus differs with its central controller (arbiter) that connects to the "memory clients". Around the chip are two 256-bit ring buses running at the same speed as the DRAM chips, but in opposite directions to reduce latency. Along these ring buses are four "stop" points where data exits the ring and goes into or out of the memory chips. There is a fifth, significantly less complex stop designed for the PCI Express interface and video input. This design makes memory accesses quicker through lower latency, owing to the smaller distance the signals need to move through the GPU, and by increasing the number of banks per DRAM. The chip can spread out memory requests faster and more directly to the RAM chips. ATI claimed a 40% improvement in efficiency over older designs. Smaller cores such as the RV515 and RV530 received cutbacks due to their smaller, less costly designs; the RV530, for example, has two internal 128-bit buses instead. This generation supports all recent memory types, including GDDR4. In addition to the ring bus, each memory channel has a granularity of 32 bits, which improves memory efficiency when performing small memory requests.[3]
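
A minimal sketch of why the counter-rotating rings help, assuming only the four memory stops described above; the routing logic is illustrative, not ATI's implementation.

# Illustrative model (not ATI's implementation) of why two counter-rotating rings
# reduce latency: a transfer can always travel in whichever direction is shorter.
RING_STOPS = 4   # the four memory "stop" points described above

def hops(src, dst, stops=RING_STOPS):
    clockwise = (dst - src) % stops
    counterclockwise = (src - dst) % stops
    return min(clockwise, counterclockwise)   # pick the shorter of the two rings

# Worst case with a single one-way ring would be stops - 1 hops; with both
# directions available it drops to stops // 2.
worst = max(hops(s, d) for s in range(RING_STOPS) for d in range(RING_STOPS))
print(worst)   # -> 2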

The vertex shader engines were already at the required FP32 precision in ATI's older products. Changes necessary for SM3.0 included longer instruction lengths, dynamic flow-control instructions (branches, loops, and subroutines), and a larger temporary register space. The pixel shader engines are actually quite similar in computational layout to their R420 counterparts, although they were heavily optimized and tweaked to reach high clock speeds on the 90 nm process. ATI had been working for years on a high-performance shader compiler in their driver for their older hardware, so staying with a compatible basic design offered obvious cost and time savings.[3]

At the end of the pipeline, the texture addressing processors are decoupled from the pixel shaders, so any unused texturing units can be dynamically allocated to pixels that need more texture layers. Other improvements include 4096×4096 texture support, and ATI's 3Dc normal-map compression gained improved compression ratios in certain situations.[3]

The R5xx family introduced a more advanced onboard motion-video engine. Like Radeon cards since the R100, the R5xx can offload almost the entire MPEG-1/2 video pipeline. The R5xx can also assist in Microsoft WMV9/VC-1 and H.264/MPEG-4 AVC decoding, using a combination of the 3D pipeline's shader units and the motion-video engine. Benchmarks show only a modest decrease in CPU utilization for VC-1 and H.264 playback.

A selection of real-time 3D demonstration programs was released at launch. ATI's development of its "digital superstar", Ruby, continued with a new demo named The Assassin. It showcased a highly complex environment, with high-dynamic-range lighting (HDR) and dynamic soft shadows. Ruby's latest adversary, Cyn, was composed of 120,000 polygons.[4]

The cards support dual-link DVI output and HDCP. However, HDCP requires an external ROM, which was not installed on early models of the video cards. The RV515, RV530, and RV535 cores include one single-link and one dual-link DVI output; the R520, RV560, RV570, R580, and R580+ cores include two dual-link DVI outputs.

AMD has released the final revision of its Radeon R5xx Acceleration programming documentation.[5]

Drivers

The last AMD Catalyst version that officially supports the X1000 series is 10.2, display driver version 8.702.

Variants

X1300–X1550 series

X1300 with GPU RV515 (heat sink removed)

This series is the budget solution of the X1000 series and is based on the RV515 core. The chips have four texture units, four ROPs, four pixel shaders, and two vertex shaders, similar to the older X300–X600 cards. These chips are effectively one quad of an R520, whereas faster boards simply use more of these quads; for example, the X1800 uses four quads. This modular design allows ATI to build a "top to bottom" line-up using identical technology, saving research and development time and money. Because of the smaller design, these cards have lower power demands (30 watts), so they run cooler and can be used in smaller cases.[3] Eventually, ATI created the X1550 and discontinued the X1300. The X1050 was based on the R300 core and was sold as an ultra-low-budget part.
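
As a sketch of this modular scaling, using only the quad counts given in the text (everything else is illustrative):

# Sketch of the modular "quad" scaling described above: each quad contributes four
# pixel pipelines, so the same building block spans the whole line-up. Quad counts
# follow the text (X1300/RV515 = 1 quad, X1800/R520 = 4 quads).
PIPES_PER_QUAD = 4
lineup = {"Radeon X1300 (RV515)": 1, "Radeon X1800 (R520)": 4}
for card, quads in lineup.items():
    print(f"{card}: {quads} quad(s) -> {quads * PIPES_PER_QUAD} pixel pipelines")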

Early Mobility Radeon X1300 to X1450 are based around the RV515 core as well.[6][7][8][9]

Beginning in 2006, Radeon X1300 and X1550 products were shifted to the RV505 core, which had similar capabilities and features to the previous RV515 core but was manufactured by TSMC on an 80 nm process (reduced from the 90 nm process of the RV515).[10]

X1600 series

The X1600 uses the M56[1] core, which is based on the RV530, a core similar to but distinct from the RV515.

The RV530 has a 3:1 ratio of pixel shaders to texture units. It possesses 12 pixel shaders while retaining the RV515's four texture units and four ROPs. It also gains three extra vertex shaders, bringing the total to five units. The chip's single "quad" has three pixel shader processors per pipeline, similar to the design of the R580's four quads. This means that the RV530 has the same texturing ability as the X1300 at the same clock speed, but with its 12 pixel shaders it is on par with the X1800 in shader computational performance. Given the content of available games, the X1600 is greatly hampered by its lack of texturing power.[3]
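
A rough per-clock comparison implied by the unit counts above can make the design point clearer; clock speeds are deliberately ignored, so this only illustrates the 3:1 ratio rather than real-world performance.

# Rough per-clock comparison implied by the unit counts above (texture units vs.
# pixel shader processors). Clock speeds are ignored, so this only illustrates the
# RV530's 3:1 shader-to-texture design point, not real-world performance.
parts = {
    "X1300 (RV515)": {"tex": 4,  "ps": 4},
    "X1600 (RV530)": {"tex": 4,  "ps": 12},
    "X1800 (R520)":  {"tex": 16, "ps": 16},
}
for name, u in parts.items():
    ratio = u["ps"] // u["tex"]
    print(f"{name}: {u['tex']} texels/clock, {u['ps']} shader ops/clock "
          f"({ratio}:1 shader:texture ratio)")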

The X1600 was positioned to replace Radeon X600 and Radeon X700 as ATI's mid-range GPU. The Mobility Radeon X1600 and X1700 are also based on the RV530.[11][12]

X1650 series

ATI Radeon X1650 Pro

The X1650 series has two parts. The X1650 Pro uses the RV535 core (an RV530 manufactured on the newer 80 nm process) and has lower power consumption and heat output than the X1600.[13] The X1650 XT uses the RV560, a lower-performance derivative of the newer RV570 core (the fully equipped RV570 powers the X1950 Pro, a high-performance card), positioned to match its main competitor, Nvidia's GeForce 7600 GT.[14]

X1800 series

Originally the flagship of the X1000 series, the X1800 series received a muted reception due to its delayed release and the ground gained in the meantime by its competitor, NVIDIA's GeForce 7 series. When the X1800 entered the market in late 2005, it was the first high-end video card with a 90 nm GPU. ATI opted to fit the cards with either 256 MB or 512 MB of on-board memory, foreseeing ever-growing demands on local memory size. The X1800 XT PE was available exclusively with 512 MB of on-board memory. The X1800 replaced the R480-based Radeon X850 as ATI's premier performance GPU.[3]

With the R520's delayed release, its competition was far more impressive than if the chip had made its originally scheduled spring/summer release. Like its predecessor, the X850, the R520 chip carries four "quads", which means it has texturing capability similar to that of its ancestor and of the NVIDIA 6800 series at the same clock speed. Unlike the X850's, the R520's shader units are vastly improved: they are Shader Model 3 capable and received advancements in shader threading that can greatly improve the efficiency of the shader units. Unlike the X1900, the X1800 has 16 pixel shader processors and an equal ratio of texturing to pixel-shading capability. The chip also increases the vertex shader count from six on the X800 to eight. With the 90 nm low-k fabrication process, these high-transistor-count chips could still be clocked at very high frequencies, which allows the X1800 series to be competitive with GPUs that have more pipelines but lower clock speeds, such as the 24-pipeline NVIDIA 7800 and 7900 series.[3]
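
A back-of-the-envelope illustration of that tradeoff, assuming approximate launch reference core clocks (roughly 625 MHz for the X1800 XT and 430 MHz for the GeForce 7800 GTX; these figures are not stated in this article):

# Back-of-the-envelope look at the "fewer pipelines, higher clock" tradeoff.
# Peak texel rate = pipelines x core clock. The clock figures are approximate
# launch reference clocks assumed for this example, not values from this article.
cards = {
    "Radeon X1800 XT (16 pipelines)":  (16, 625e6),   # ~625 MHz core (assumed)
    "GeForce 7800 GTX (24 pipelines)": (24, 430e6),   # ~430 MHz core (assumed)
}
for name, (pipes, clock) in cards.items():
    print(f"{name}: ~{pipes * clock / 1e9:.1f} Gtexels/s theoretical peak")

Under these assumptions both designs land at roughly 10 Gtexels/s of theoretical peak, which is the sense in which the higher-clocked 16-pipeline part stayed competitive.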

The X1800 was quickly replaced by the X1900 because of the X1800's delayed release; the X1900 was not behind schedule and had always been planned as the "spring refresh" chip. However, due to the large quantity of unused X1800 chips, ATI decided to disable one quad of pixel pipelines on them and sell them as the X1800 GTO.

X1900 and X1950 series

Sapphire Radeon X1950 Pro

The X1900 and X1950 series fixed several flaws in the X1800 design and added a significant pixel-shading performance boost. The R580 core is pin-compatible with the R520, so the X1800 PCB did not need to be redesigned. The boards carry either 256 MB or 512 MB of onboard GDDR3 memory depending on the variant. The primary change between the R580 and the R520 is that ATI changed the pixel shader processor-to-texture processor ratio: the X1900 cards have three pixel shaders on each pipeline instead of one, giving a total of 48 pixel shader units. ATI took this step in the expectation that future 3D software would be more pixel-shader intensive.[15]

In the latter half of 2006, ATI introduced the Radeon X1950 XTX, a graphics board using a revised R580 GPU called the R580+. The R580+ is the same as the R580 except that it supports GDDR4 memory, a new graphics DRAM technology that offers lower power consumption per clock and a significantly higher clock-rate ceiling. The X1950 XTX clocks its RAM at 1 GHz (2 GHz effective), providing 64.0 GB/s of memory bandwidth, a 29% advantage over the X1900 XTX. The card was launched on August 23, 2006.[16]
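
The quoted figures can be checked with a short calculation. Both cards use a 256-bit memory bus; the X1900 XTX's roughly 1.55 GHz effective GDDR3 clock is assumed here for the comparison and is not stated in this article.

# Worked check of the memory-bandwidth figures quoted above. Both cards use a
# 256-bit bus; the X1900 XTX's ~1.55 GHz effective GDDR3 clock is an assumption
# for comparison and is not stated in this article.
BUS_BITS = 256

def bandwidth_gb_s(effective_clock_hz):
    return effective_clock_hz * BUS_BITS / 8 / 1e9   # bits -> bytes -> GB/s

x1950_xtx = bandwidth_gb_s(2.00e9)   # 1 GHz GDDR4, 2 GHz effective
x1900_xtx = bandwidth_gb_s(1.55e9)   # ~1.55 GHz effective GDDR3 (assumed)
print(f"X1950 XTX: {x1950_xtx:.1f} GB/s")              # 64.0 GB/s
print(f"X1900 XTX: {x1900_xtx:.1f} GB/s")              # 49.6 GB/s
print(f"Advantage: {x1950_xtx / x1900_xtx - 1:.0%}")   # ~29%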

The X1950 Pro was released on October 17, 2006, and was intended to replace the X1900 GT in the competitive sub-$200 market segment. The X1950 Pro GPU is built on the 80 nm RV570 core with 12 texture units and 36 pixel shaders, and it is the first ATI card to support a native CrossFire implementation via a pair of internal CrossFire connectors, eliminating the need for the unwieldy external dongle found in older CrossFire systems.[17]

Radeon feature matrix

The following table shows features of AMD/ATI's GPUs (see also: List of AMD graphics processing units).

Name of GPU series: Wonder | Mach | 3D Rage | Rage Pro | Rage 128 | R100 | R200 | R300 | R400 | R500 | R600 | RV670 | R700 | Evergreen | Northern Islands | Southern Islands | Sea Islands | Volcanic Islands | Arctic Islands/Polaris | Vega | Navi 1x | Navi 2x | Navi 3x
Released: 1986 | 1991 | Apr 1996 | Mar 1997 | Aug 1998 | Apr 2000 | Aug 2001 | Sep 2002 | May 2004 | Oct 2005 | May 2007 | Nov 2007 | Jun 2008 | Sep 2009 | Oct 2010 | Jan 2012 | Sep 2013 | Jun 2015 | Jun 2016, Apr 2017, Aug 2019 | Jun 2017, Feb 2019 | Jul 2019 | Nov 2020 | Dec 2022
Marketing name: Wonder | Mach | 3D Rage | Rage Pro | Rage 128 | Radeon 7000 | Radeon 8000 | Radeon 9000 | Radeon X700/X800 | Radeon X1000 | Radeon HD 2000 | Radeon HD 3000 | Radeon HD 4000 | Radeon HD 5000 | Radeon HD 6000 | Radeon HD 7000 | Radeon 200 | Radeon 300 | Radeon 400/500/600 | Radeon RX Vega, Radeon VII | Radeon RX 5000 | Radeon RX 6000 | Radeon RX 7000
AMD support: Ended | Current
Kind: 2D | 3D
Instruction set architecture: Not publicly known | TeraScale instruction set | GCN instruction set | RDNA instruction set
Microarchitecture: TeraScale 1 (VLIW) | TeraScale 2 (VLIW5) | TeraScale 2 (VLIW5) up to 68xx, TeraScale 3 (VLIW4) in 69xx[18][19] | GCN 1st gen | GCN 2nd gen | GCN 3rd gen | GCN 4th gen | GCN 5th gen | RDNA | RDNA 2 | RDNA 3
Type: Fixed pipeline[a] | Programmable pixel & vertex pipelines | Unified shader model
Direct3D: 5.0 | 6.0 | 7.0 | 8.1 | 9.0, 11 (9_2) | 9.0b, 11 (9_2) | 9.0c, 11 (9_3) | 10.0, 11 (10_0) | 10.1, 11 (10_1) | 11 (11_0) | 11 (11_1), 12 (11_1) | 11 (12_0), 12 (12_0) | 11 (12_1), 12 (12_1) | 11 (12_1), 12 (12_2)
Shader model: 1.4 | 2.0+ | 2.0b | 3.0 | 4.0 | 4.1 | 5.0 | 5.1 | 5.1, 6.5 | 6.7
OpenGL: 1.1 | 1.2 | 1.3 | 2.1[b][20] | 3.3 | 4.5 (on Linux: 4.5 with Mesa 3D 21.0)[21][22][23][c] | 4.6 (on Linux: 4.6 with Mesa 3D 20.0)
Vulkan: 1.0 (Win 7+ or Mesa 17+) | 1.2 (Adrenalin 20.1.2, Linux Mesa 3D 20.0) | 1.3 (GCN 4 and above, with Adrenalin 22.1.2, Mesa 22.0) | 1.3
OpenCL: Close to Metal | 1.1 (no Mesa 3D support) | 1.2+ (on Linux with Mesa 3D: 1.1+, no Image support in clover but available via rustiCL; 1.2+ on 1st-gen GCN) | 2.0+ (Adrenalin driver on Windows 7+; on Linux: ROCm, or Mesa 3D 1.2+ with no Image support in clover but available in rustiCL; 2.0+ and 3.0 with AMD drivers or AMD ROCm; 5th gen: 2.2 on Windows 10+ and Linux ROCm 5.0+) | 2.2+ and 3.0 on Windows 8.1+ and Linux ROCm 5.0+ (Mesa 3D rustiCL 1.2+ and 3.0; 2.1+ and 2.2+ work in progress)[24][25][26]
HSA / ROCm: Yes | ?
Video decoding ASIC: Avivo/UVD | UVD+ | UVD 2 | UVD 2.2 | UVD 3 | UVD 4 | UVD 4.2 | UVD 5.0 or 6.0 | UVD 6.3 | UVD 7[27][d] | VCN 2.0[27][d] | VCN 3.0[28] | VCN 4.0
Video encoding ASIC: VCE 1.0 | VCE 2.0 | VCE 3.0 or 3.1 | VCE 3.4 | VCE 4.0[27][d]
Fluid Motion:[e] No | Yes | No | ?
Power saving: ? | PowerPlay | PowerTune | PowerTune & ZeroCore Power | ?
TrueAudio: Via dedicated DSP | Via shaders
FreeSync: 1 | 2
HDCP:[f] ? | 1.4 | 2.2 | 2.3[29]
PlayReady:[f] 3.0 | No | 3.0
Supported displays:[g] 1–2 | 2 | 2–6 | ?
Max. resolution: ? | 2–6 × 2560×1600 | 2–6 × 4096×2160 @ 30 Hz | 2–6 × 5120×2880 @ 60 Hz | 3 × 7680×4320 @ 60 Hz[30] | 7680×4320 @ 60 Hz (PowerColor: 7680×4320 @ 165 Hz)
/drm/radeon:[h] Yes
/drm/amdgpu:[h] Experimental[31] | Optional[32] | Yes
  a. ^ The Radeon 100 series has programmable pixel shaders, but does not fully comply with DirectX 8 or Pixel Shader 1.0. See the article on the R100's pixel shaders.
  b. ^ R300-, R400- and R500-based cards do not fully comply with OpenGL 2+, as the hardware does not support all types of non-power-of-two (NPOT) textures.
  c. ^ OpenGL 4+ compliance requires supporting FP64 shaders, which are emulated on some TeraScale chips using 32-bit hardware.
  d. ^ a b c The UVD and VCE were replaced by the Video Core Next (VCN) ASIC in the Raven Ridge APU implementation of Vega.
  e. ^ Video processing for video frame-rate interpolation. On Windows it works as a DirectShow filter in the media player; on Linux there is no driver or community support.
  f. ^ a b Playing protected video content also requires card, operating system, driver, and application support, as well as a compatible HDCP display. HDCP is mandatory for the output of certain audio formats, placing additional constraints on the multimedia setup.
  g. ^ More displays may be supported with native DisplayPort connections, or by splitting the maximum resolution between multiple monitors with active converters.
  h. ^ a b DRM (Direct Rendering Manager) is a component of the Linux kernel. amdgpu is the Linux kernel module. Support in this table refers to the most current version.

Chipset table

See also

References

  1. ^ "Radeon X1K Real-Time Demos". Archived from the original on May 7, 2009.
  2. ^ "Download AMD Drivers".
  3. ^ a b c d e f g h Wasson, Scott. ATI's Radeon X1000 series graphics processors, Tech Report, October 5, 2005.
  4. ^ "AMD Catalyst™ Display Driver".
  5. ^ Advanced Micro Devices, Inc. Radeon R5xx Acceleration v. 1.5, AMD website, October 2013.
  6. ^ Mobility Radeon X1300 Archived May 9, 2007, at the Wayback Machine, ATI. Retrieved June 8, 2007.
  7. ^ Mobility Radeon X1350 Archived March 25, 2007, at the Wayback Machine, ATI. Retrieved June 8, 2007.
  8. ^ Mobility Radeon X1400 Archived June 15, 2007, at the Wayback Machine, ATI. Retrieved June 8, 2007.
  9. ^ Mobility Radeon X1450 Archived June 3, 2007, at the Wayback Machine, ATI. Retrieved June 8, 2007.
  10. ^ AMD samples 80nm RV505CE – finally, The Inquirer, November 16, 2006. Retrieved February 4, 2011.
  11. ^ Mobility Radeon X1700 Archived May 26, 2007, at the Wayback Machine, ATI. Retrieved June 8, 2007.
  12. ^ Mobility Radeon X1600 Archived June 22, 2007, at the Wayback Machine, ATI. Retrieved June 8, 2007.
  13. ^ Hanners. PowerColor Radeon X1650 PRO video card review, Elite Bastards, August 27, 2006.
  14. ^ Wasson, Scott. ATI's Radeon X1650 XT graphics card, Tech Report, October 30, 2006.
  15. ^ Wasson, Scott. ATI's Radeon X1900 series graphics cards, Tech Report, January 24, 2006.
  16. ^ Wasson, Scott. ATI's Radeon X1950 XTX and CrossFire Edition graphics cards, Tech Report, August 23, 2006.
  17. ^ Wilson, Derek. ATI Radeon X1950 Pro: CrossFire Done Right, AnandTech, October 17, 2006.
  18. ^ "AMD Radeon HD 6900 (AMD Cayman) series graphics cards". HWlab. hw-lab.com. December 19, 2010. Archived from the original on August 23, 2022. Retrieved August 23, 2022. New VLIW4 architecture of stream processors allowed to save area of each SIMD by 10%, while performing the same compared to previous VLIW5 architecture
  19. ^ "GPU Specs Database". TechPowerUp. Retrieved August 23, 2022.
  20. ^ "NPOT Texture (OpenGL Wiki)". Khronos Group. Retrieved February 10, 2021.
  21. ^ "AMD Radeon Software Crimson Edition Beta". AMD. Retrieved April 20, 2018.
  22. ^ "Mesamatrix". mesamatrix.net. Retrieved April 22, 2018.
  23. ^ "RadeonFeature". X.Org Foundation. Retrieved April 20, 2018.
  24. ^ "AMD Radeon RX 6800 XT Specs". TechPowerUp. Retrieved January 1, 2021.
  25. ^ "AMD Launches The Radeon PRO W7500/W7600 RDNA3 GPUs". Phoronix. August 3, 2023. Retrieved September 4, 2023.
  26. ^ "AMD Radeon Pro 5600M Grafikkarte". TopCPU.net (in German). Retrieved September 4, 2023.
  27. ^ a b c Killian, Zak (March 22, 2017). "AMD publishes patches for Vega support on Linux". Tech Report. Retrieved March 23, 2017.
  28. ^ Larabel, Michael (September 15, 2020). "AMD Radeon Navi 2 / VCN 3.0 Supports AV1 Video Decoding". Phoronix. Retrieved January 1, 2021.
  29. ^ Edmonds, Rich (February 4, 2022). "ASUS Dual RX 6600 GPU review: Rock-solid 1080p gaming with impressive thermals". Windows Central. Retrieved November 1, 2022.
  30. ^ "Radeon's next-generation Vega architecture" (PDF). Radeon Technologies Group (AMD). Archived from the original (PDF) on September 6, 2018. Retrieved June 13, 2017.
  31. ^ Larabel, Michael (December 7, 2016). "The Best Features of the Linux 4.9 Kernel". Phoronix. Retrieved December 7, 2016.
  32. ^ "AMDGPU". Retrieved December 29, 2023.

External links