Foundations of Display Technology: The Pixel
Part 2: Basic principles – The pixel as a fundamental unit
A pixel is many things to many audiences, with a fragmented definition covering content creation, data transmission, display control, display engineering, signal processing, popular culture, digital art, historical art, and corporate branding. This article confines itself to a high-level look at the pixel from the point of view of users of display technology.
A pixel is a single point expressed in technology as an electronic data source translated into digestible information with optical and/or physical characteristics.
The End
This brief definition exists alongside the many cultural and technical uses of the term. At this moment the word is out in the world as the name of a dog, a phone, a payment service, a movie, and a bar, but sadly not a small town. Nevertheless, understanding the origin of the term and the different ways it is used may be helpful.
Cultural and historical context of the pixel
The pixel has been described as a fundamental unit of modern communication, and in this sense pixels will always be specific to points in history. Pixels are the movable type of this moment in time, an expression of technology interacting with the cultural priorities of the moment. A pixel from 2003 is not the same as a pixel in 2023. This is both a qualitative judgment and an aesthetic statement. In that 20-year period we have seen advances in TFTs, display glass, liquid crystal materials, display films, the arrival of OLED, quantum dots, and the early examples of microLED displays, all of which mean that a website built in 2003 will look different today, and that a film made in 1983 looked different in 2003 than it does now.
This article will not dive deeply into cultural priorities, nor will it address the role of the pixel as a cultural phenomenon expressed through bitmaps and chunky pixel-based art; however, it is important to understand these things, as they all connect to how we define the pixel. Those specifically interested in pixel art should look at the work of people like Susan Kare, Kim Asendorf, John F. Simon Jr, Charlotte Johannesson, Takashi Komiyama (hermippe), eBoy, and Jodi — a list that does not scratch the surface. The Charlotte Kent piece (https://www.rightclicksave.com/article/pixel-art-and-the-age-of-technostalgia) covering a recent pixel art exhibit at Unit London is a good introduction, and includes commentary by the artists involved in that exhibition.
Work produced on the Quantel Paintbox is sometimes referenced as pixel art, but the output of the original Paintbox was, at best, analog component video. This analog format, combined with the available broadcast display technology, would not have allowed for a fixed, knowable point to be created on a screen. This is not dissimilar to conversations about the differences between deterministic systems and nondeterministic systems. For the Quantel Paintbox the output might be slightly different on several monitors in the same room. The characterization of work produced on a Paintbox is part of a tendency to project the architecture of modern displays back into the past. At the same time, concepts that developed alongside the Paintbox, the video Fairlight (the Fairlight CVI), and other content creation platforms, as well as the video processing that managed images intended to be displayed via continuously scanning electron beams, are actively in use in modern displays, just as elements of Pulse Code Modulation and sampling theory form the foundation of modern data distribution.
In historic terms a dot represents a sample that is displayed at a location within a raster on a phosphor-covered surface based on the voltage, the timing, and the tolerances of the available components at the time of manufacture¹. The electron beam’s location exists as a continuous coordinate within the scanning structure, not as discrete pixels. Horizontal position is determined by the instantaneous phase relationship within the 15.734 kHz line scan (NTSC), while vertical position corresponds to “the moment” within the 59.94 Hz field scan. The design of the CRT with the deflection yoke provides theoretically infinite positioning resolution limited only by beam spot size (~0.3-0.8mm diameter) and phosphor grain structure. Theoretically infinite positioning is, oddly enough, not just a feature — any clown with a tweaker could change the physical size or position of the raster on a display. Adjusting monitors in an array so that they would match was always a task that was abandoned rather than completed.
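To make the continuous nature of that positioning concrete, here is a minimal sketch assuming idealized NTSC timing with linear sweeps and no blanking intervals; the constants come from the frequencies above, everything else is illustrative.

```python
# Minimal sketch: mapping a moment in time onto an idealized NTSC raster.
# Assumes linear horizontal and vertical sweeps and ignores blanking and
# interlace; the point is that the result is a continuous coordinate,
# not an index into a grid of pixels.

LINE_RATE_HZ = 15_734.26   # NTSC horizontal scan frequency (~15.734 kHz)
FIELD_RATE_HZ = 59.94      # NTSC vertical (field) scan frequency

def beam_position(t_seconds):
    """Return (x, y) in the 0..1 range for a continuously scanned raster."""
    x = (t_seconds * LINE_RATE_HZ) % 1.0    # phase within the current line
    y = (t_seconds * FIELD_RATE_HZ) % 1.0   # phase within the current field
    return x, y

# Any real-valued time yields a valid position; nothing here quantizes x or y.
print(beam_position(0.0123))
```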
In modern display terms information is delivered through an X,Y matrix of picture elements that may be composed of emissive, reflective, transmissive, or tactile outputs. Braille displays rely on a tactile matrix that can be updated. In most cases emissive, reflective, transflective, and transmissive systems are using the modulation of light to create an image.
Technical definitions and characteristics
A pixel is a defined point of electro-optical transformation where a part of the information is converted into light, complete as an addressable part of a larger region of visual information. This means that a single pixel should have all of the color, luminance, spatial, and temporal information that defines that element within a larger matrix of information. The pixels perform the role of converting information into light or sensation.
We characterize pixels in many different ways.
- Resolution – How many pixels are in the display
- Pixel Pitch – The distance from the center of one pixel to the center of the next pixel.
- PPI – How many pixels are in a defined unit of space
- Fill Factor – What percentage of the surface area is dedicated to modulating light/space, a factor that has many knock-on effects on contrast, moire, and ambient light rejection.
- Nits/Luminance/Illuminance – How bright is the screen in emitted, transmitted, or reflected light
- Chromaticity – How much of some arbitrary standard the pixels reproduce, typically listed as a percentage
- Refresh rate – At what frequency will the pixels refresh the image?
- Scan Multiplexing rate – At what frequency is each pixel refreshed or how many pixels are part of a scanning or multiplexing group.
- Saltiness – Did the pixel stop and make you think?
While a pixel represents a complete physical system, it is part of a larger area full of many pixels. The distance from pixel to pixel matters in the creation of an image. The ratio on the surface of a display between the pixels converting information into light and the passive areas of the display will influence the contrast of an image, how that image is processed by the eyes, and how the image interacts with sensor matrices to form moire.
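A quick worked example may help tie the metrics in the list above together; the panel dimensions and emissive area below are hypothetical numbers chosen only to show the arithmetic.

```python
# Hypothetical numbers relating resolution, pixel pitch, PPI, and fill factor.

MM_PER_INCH = 25.4

def pixel_pitch_mm(active_width_mm, horizontal_pixels):
    """Center-to-center distance between adjacent pixels."""
    return active_width_mm / horizontal_pixels

def ppi(pitch_mm):
    """Pixels per inch derived from the pixel pitch."""
    return MM_PER_INCH / pitch_mm

def fill_factor(emissive_area_mm2, pitch_mm):
    """Fraction of each pixel cell that actually modulates light."""
    return emissive_area_mm2 / (pitch_mm * pitch_mm)

# A notional 1920-pixel-wide panel with a 344 mm active width.
pitch = pixel_pitch_mm(344.0, 1920)          # ~0.179 mm pitch
print(round(ppi(pitch)))                     # ~142 PPI
print(round(fill_factor(0.02, pitch), 2))    # ~0.62 if 0.02 mm² of the cell emits
```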
A pixel receives all of the temporal information in a display. The output of the pixels is “refreshed” at a rate that may be fixed or variable according to the input to the display. In some displays the emissive or transmissive components that form the pixel may also be modulated at a rate that is much higher than the source frame rate in order to set the levels for the pixels.
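As a sketch of how much faster that internal modulation can run than the source frame rate, consider a hypothetical emissive pixel whose level is set by pulse-width modulation; the frame rate, bit depth, and PWM scheme here are illustrative, not a description of any particular panel.

```python
# Illustrative only: a 60 Hz source driving an emitter that is pulse-width
# modulated within each frame to set an 8-bit level.

SOURCE_FRAME_RATE_HZ = 60
BITS_PER_SUBPIXEL = 8
SLOTS_PER_FRAME = 2 ** BITS_PER_SUBPIXEL      # 256 on/off slots per frame

modulation_rate_hz = SOURCE_FRAME_RATE_HZ * SLOTS_PER_FRAME
print(modulation_rate_hz)                     # 15360 Hz of switching activity

def duty_cycle(level):
    """Fraction of the frame the emitter is on for a given 8-bit level."""
    return level / (SLOTS_PER_FRAME - 1)

print(duty_cycle(128))                        # ~0.50 for a mid-gray level
```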
In these ways pixels are a specification that characterizes the image to be displayed, the component at the end of a signal chain that delivers the image, and an immediate part of an experience for the viewer.
Having digested this you should watch Pixels and Me, a lecture by Richard Lyon that was delivered at the Computer History Museum in Mountain View. This lecture, along with accompanying documentation, covers the introduction of the “vile neologism” (Andrew T. Young) and the path it took to common use.
Perhaps the most fascinating detail in this lecture is how recent the common acceptance of the term is. Well into the 1970s and 1980s it was probably quite common to hear someone complaining about some defect in an image by pointing to a “dot” or a “spot”. The January/February 1965 issue of Information Display did not use the term pixel or picture element but did mention a resolution element, a term that may have also been used in radar.

Evolution from analog to digital displays
There was no plan to “digitize the display” and the initial examples of digital displays were not great. Few would look at early passive matrix displays and say that this was a huge qualitative leap over the best monitors of the day. The evolution of the pixel is closely tied to the technology used to address the color, luminance, spatial, and temporal information required by the pixel but the evolution of the display industry as a whole is pulled along behind applications and the ability to scale.
The CRT was a profoundly analog thing and it was, at the start, also simple enough to control that various European countries each had their own standards prior to the implementation of PAL and SECAM in 1967. This is years after the adoption of NTSC in the United States. It would take another fifteen years for the arrival of BT.601 and the introduction of the sampling architecture for video standards that set the stage for the broad adoption of pixels.
An electron beam (the “cathode ray”) scans continuously across the screen defining the raster, controlled by varying voltage signals that deflect the beam horizontally and vertically (with a second deflection coil at 90 degrees). The electron beam strikes the phosphor, which re-emits that energy as light at a certain wavelength, and this emission immediately starts to decay. The resulting dot has a Gaussian shape – not a perfect square – and its intensity is periodically modulated by the incoming signal. These driving methods also influenced the shape of pixels. As recently as the development of the HD standards there were questions about the aspect ratio of a pixel. The pixel in a CRT was roughly 0.9:1 due to a combination of legacy technology and the different PAL and NTSC standards that required different frequencies and numbers of scan lines.
In color CRTs, a shadow mask ensures the beam strikes only the appropriate colored phosphor dots. These phosphor elements are not pixels – they are physical elements excited by an analog beam that can strike at any position within manufacturing tolerances. In color displays this is partially determined by the placement of the shadow mask in relationship to the cathodes and the phosphor placement. There is no one-to-one mapping between signal elements and phosphor dots.
The NTSC, PAL, and SECAM standards defined this process in purely analog terms – specifying frequencies, timing, and encoding methods without reference to discrete sampling. The television signal itself was continuous and analog, with no inherent digital structure. This is true of both raster displays, using a continuously scanning beam with a fixed timing structure, and stroke monitors where lines are drawn directly through the movement of the electron beam across a phosphor coated surface. Stroke monitors and other special purpose displays could depart from NTSC/PAL/SECAM mandated scanning frequencies.
Text legibility and other early computing requirements also led to the development of methods for handling characters and defined objects. We are into proto-pixels: displays designed to approximate pixels to support computer-based workflows. Character cell displays had limited mapped outputs, prescriptively defining a window as an X,Y grid of characters. This is where characters per line come into play, with the characters stored in ROM as fixed values. These displays were monochrome, and they define how early computing is portrayed in film and TV.
A brief interlude
As Alvy Ray Smith states: “There are no little squares involved at any step of the process. There are, as usual in imaging, overlapping shapes that serve as natural reconstruction filters.”² The pixel, as Alvy Ray Smith’s mathematical point, resides within a square boundary imposed upon it because square grids resolve a lot of problems both for display geometry and for display manufacturing. Square pixels maintain the relationships of the source material in a very simple way that can be tracked back through the image generation process and the early application of these technologies in computing and design focused applications.
This interlude relates to an extension of Smith’s comments into the creation of 3D images using 2D precursors. The blurb for the Jon Peddie post on the ACM Siggraph blog in 2021 celebrating the anniversary of the pixel says “Dive into the origins of the pixel, which is a 3D gaussian point spread and not a little colored square.” There may not be a continuous line from Smith to Kerbl et al., but there are some fascinating similarities, and given the historic connection between imagers and displays, this bears some relevance to the future of pixels. It feels plausible that Lee Alan Westover (author of an early 1990s paper on splats) and Alvy Ray Smith met up at Siggraph and talked about points.
Smith’s 1995 paper emphasized several key ideas that echo modern Gaussian splats:
- Point-Based Representation: Smith insisted “a pixel is a point sample. It exists only at a point.” This fundamental view of discrete elements representing continuous signals is entirely consistent with the philosophical foundation of Gaussian splats 👀.
- Overlapping Reconstruction: Smith described how proper image display involves “overlapping shapes that serve as natural reconstruction filters” – explicitly mentioning Gaussian patterns as an ideal shape for this reconstruction.
- Continuous vs. Discrete: Smith rejected the discrete “little square” model in favor of a continuous representation using overlapping filters – exactly what Gaussian splats do in 3D space.
Modern 3D Gaussian Splatting (which gained prominence around 2023) builds directly on these concepts:
- It represents 3D scenes as collections of oriented 3D Gaussian distributions
- Each “splat” is a continuous function rather than a discrete voxel
- These Gaussians overlap and are blended to create a continuous representation of the scene
- When projected to the image plane, they create precisely the type of “overlapping Gaussian-shaped patterns” Smith described for proper image reconstruction
What bridges these concepts across nearly three decades is the fundamental understanding that:
- Digital imagery is fundamentally about sampling continuous signals
- Reconstruction of continuous signals from samples requires proper filtering
- Gaussian-shaped filters provide mathematically elegant reconstruction properties
- Overlapping continuous functions create higher quality results than discrete “little squares”
What Smith described as the correct theoretical foundation for understanding pixels developed in parallel to the principles required to use splats in a fully realized graphics system. The computational power that makes Gaussian splats practical today wasn’t available in the 1990s, and perhaps the resolution to take full advantage of Gaussian splats on the display side does not exist today, but Smith’s characterization of the pixel is entirely consistent with the development of modern Gaussian splats.
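A one-dimensional sketch may make the shared idea concrete: samples exist only at points, and the visible signal is a sum of overlapping kernels centered on those points. The sample values and the Gaussian width below are arbitrary illustrative choices, not anything prescribed by Smith’s memo or by the splatting papers.

```python
# Point samples plus overlapping Gaussian reconstruction kernels, in 1D.
import math

samples = [(0.0, 0.2), (1.0, 0.9), (2.0, 0.4), (3.0, 0.7)]  # (position, value)
SIGMA = 0.6  # kernel width chosen so neighboring kernels overlap

def gaussian(x, center, sigma=SIGMA):
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def reconstruct(x):
    """Normalized weighted sum of the kernels: a continuous signal from points."""
    num = sum(value * gaussian(x, pos) for pos, value in samples)
    den = sum(gaussian(x, pos) for pos, _ in samples)
    return num / den

# Evaluating between the sample positions gives smooth intermediate values;
# there are no little squares anywhere in the process.
print([round(reconstruct(i / 4), 3) for i in range(13)])
```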
Modern pixel technologies and innovations
True pixels only emerged with digital standards like Rec.601 (1982), which formally defined how to sample television signals according to the Sampling Theorem – establishing the mathematical framework that finally brought the pixel to television.
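The arithmetic behind that sampling structure is compact enough to show directly: a single 13.5 MHz luminance sampling clock divides evenly into both the 525/59.94 and 625/50 line rates, which is how a continuous scan line becomes a fixed, shared number of samples. The figures below come from Rec.601; treat the script itself as a back-of-the-envelope check rather than a formal derivation.

```python
# Rec.601 back-of-the-envelope: one luma sampling clock, two legacy line rates.

LUMA_SAMPLE_RATE_HZ = 13_500_000      # Rec.601 luminance sampling frequency
LINE_RATE_525_HZ = 15_734.265734      # 525-line / 59.94 Hz systems
LINE_RATE_625_HZ = 15_625.0           # 625-line / 50 Hz systems

print(LUMA_SAMPLE_RATE_HZ / LINE_RATE_525_HZ)   # ~858 total samples per line
print(LUMA_SAMPLE_RATE_HZ / LINE_RATE_625_HZ)   # 864 total samples per line

# Both systems carry 720 of those samples in the active picture, with 4:2:2
# chroma sampled at half the luma rate.
print(LUMA_SAMPLE_RATE_HZ / 2)                  # 6,750,000 Hz chroma sampling
```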
Passive matrix displays form a bridge between the analog age and the digital age. This starts out in smaller static or segment displays. The seven segment display is at the center of the growing importance of representing type via an extensible electrical display for calculations and communication. Segment displays can be found in many forms with the LED display predating the LCD display in smaller personal devices. The passive addressing of these displays is an important step between purely analog displays and the digital active matrix displays that start to emerge.
The earliest commercial passive matrix display technology was the plasma display panel. Owens-Illinois delivered the first commercial plasma display product in 1971 to the University of Illinois, featuring a 12-inch diagonal 512 x 512-pixel full-graphic display. This display took advantage of the plasma cells’ inherent memory characteristic, eliminating the need for active matrix transistors. Throughout the 1970s and early 1980s, companies including IBM, Fujitsu, NEC, and others expanded the passive matrix plasma display market for specialized applications.
Vacuum fluorescent displays became commercially significant in the 1970s, with Noritake Itron Corporation emerging as a leading manufacturer. These displays used phosphor-coated anodes that emitted light when struck by electrons from a heated cathode within a vacuum tube structure. Noritake pioneered many VFD innovations, developing both character-segment and dot-matrix passive displays that were widely used in consumer electronics, automotive dashboards, and instrumentation panels. Other significant VFD manufacturers included Futaba, Ise Electronics (Ise-Itron), and Babcock Display Products. VFDs were popular for their high brightness, wide viewing angle, and ability to display in various colors (typically blue-green, though multi-color versions were later developed). Secretly we all want the displays in our stereos and personal electronics to be vacuum fluorescent displays. This is a safe space. You can admit this.
Passive matrix electroluminescent displays gained commercial traction in the late 1970s and early 1980s. Planar Systems, Inc. became a key manufacturer, focusing on industrial applications. These displays were particularly suitable for industrial control systems, military applications, medical equipment, and transportation displays. Sharp Corporation also manufactured electroluminescent panels for specialized applications. The technology required high voltage (120VAC @ 400Hz typically) but offered advantages in durability and environmental tolerance.
These early display technologies all have fans who are attracted to the specific visual styles of early displays. The reference to pixels being movable type clearly extends beyond the fixed definition of the pixel to the cultural significance of digital displays and the way they evoke certain periods of time. Think of the specific glow of VFDs and EL panels and their association with vintage equipment, or the way that the colors of CGA and EGA graphics reappear in digital art, with work built around a specific cyan or magenta from the limited palette of colors supported on early computers.
These displays also had shortcomings. Plasma displays were thermally inefficient, generated a lot of heat, and never scaled commercially the way LCD displays scaled, leaving the market while at a qualitative peak. Electroluminescent displays required high voltage and were impractical in the areas where display technology was expanding fastest. The performance of Vacuum Fluorescent Displays failed across multiple fronts, but they still look cool. These displays were the best at certain things. Electroluminescent displays were robust. Plasma displays had far better image quality, a good balance between the square matrix of the LCD and the phosphor response of a CRT. Neither technology could scale in the way that the LCD would scale; however, there were electroluminescent display demos with quantum dots at Display Week in May of 2025.
The super-twisted nematic (STN) LCD structure, invented in 1983 by researchers at Brown, Boveri & Cie (BBC) Research Center in Switzerland, revolutionized passive matrix addressing for LCDs. This technology enabled high-resolution displays with improved contrast and viewing angles compared to earlier twisted nematic designs. The commercial breakthrough came through BBC’s joint venture with Philips called Videlec. By 1984-85, Philips researchers had solved key technical problems, including slow response time and driving high-resolution displays with low-voltage electronics.
STN LCDs found wide commercial application in the late 1980s and early 1990s in products including the Nintendo Game Boy (introduced 1989), early cellular phones, calculator displays, and laptop computers including the early Apple PowerBook series. These displays offered a balance of power efficiency, reasonable performance, and cost effectiveness that made them the dominant technology for portable electronic devices until active matrix LCDs became more affordable in the mid-1990s. The success of passive displays in new markets (phones and laptops) helped drive the shift to active matrix displays. These markets were much more tolerant of the shortcomings of both passive matrix displays and early active matrix displays, which combined superior form factors with better power architecture.
The criticality of the TFT backplane in this transition, along with the rapidly expanding use of LCD displays in phones, laptops, and small televisions, led to growth in the LCD sector and drove the scaling of LCD fabs from Gen 1 (early 1990s), the first generation of LCD factories, with glass size of less than 1/10th of a square meter, to Gen 10.5 (2017), with glass size of almost ten square meters. The photolithographic processing involves a series of masks (basically very expensive screen printing where the masks are both expensive and easily damaged) which build up the TFT layers for the panel. This process yields pixels that are tightly controlled and highly uniform. Comparing this with CRT production, the lack of control and the inability to scale the process (and the panel dimensions) become obvious. It was very clear that the CRT businesses were not going to win this battle. In the early 2000s CRT factories started to close, with most completely shuttered by 2015. The ability of the fab-based system to scale is a recurring part of this story and is already part of the microLED conversation.
The industry has moved from monochrome displays to two- and three-color displays across multiple technology stacks. The market has also supported displays with individual pixels that are at the micron scale and individual pixels that can be measured in meters. A pixel in an LCD display may have no color filter and be a single millimeter across, while a pixel for an older LED display may integrate multiple discrete red, green, and blue LED packages arranged to manage color shift and deliver the amount of light required to be visible in high ambient light.
The color performance of a pixel in any given display stack is a balance between the maturity of the supply chain and the maturity of a technology. For commercial displays the ability to deliver high volumes of color filters, color materials, or phosphor at the right quality at the right price drives the selection of colors for subpixels. In the LCD supply chain there are companies that specialize in color filters (examples are Toppan and DNP in LCD, and LG Display in OLED). These companies are optimized around producing film or glass substrates or supplying materials to large fabs. These supply chains are focused on volume and in the case of films likely deliver rolls that are several kilometers long. In developing technologies it may be issues with material lifetimes or the yields on the material that determine the available options for a pixel. The LED market has been a great example of this as it has moved from single color displays, to two color displays, to four or five color displays.
Pin array based graphical displays are a key part of Braille displays both in text based arrays and in large display surfaces. In the text displays the characters are arranged as two columns of three dots. Advances in electronics and an interest in delivering more tactile feedback to other touch screen surfaces should make richer experiences accessible to a larger audience.
Choosing a path through the history of the pixel is complicated because a pixel is an abstraction, a mathematical representation, a physical surface area in a display, an end point in a signal chain, a problem solved by multiplexing, and any number of other things. The work of Jim Campbell is an excellent way to jump into pixel architectures and it is a bridge between the world of interlaced video and TVL (Television Lines) and the world of LED displays and virtual pixels. His work at Faroudja Labs revolved around “line doubling” and deinterlacing and the development of improved scaling techniques for video recording and playback. Early on in his own practice he built low resolution LED pieces that appeared to have far more resolution than the twelve lines of video could deliver. As someone very wise once told me, “Jim has figured out how to offload processing to the viewer”.
The critical understanding gleaned from building on top of a system based on interlacing is that temporal resource sharing worked, that an observer can fill in gaps in the information, and that edge detection and edge preservation are critical to video whether the context is text or an escalator. This directly connects with the world of alternative pixel geometries and subpixel rendering and connects them to signal processing theory (see the previous article). A physical pixel is not a city state but a tool to transfer that point to the observer. There are numerous approaches to displaying information that exist outside of a pixel designed to house all of its functions independently. Some of these are efficient because they reduce drivers or emitters and others are optimized around the available technology and the ability to optimize that technology for the human visual system.
Image processing optimizes pixel arrangements by concentrating visual detail in the frequency ranges where human eyes detect differences most effectively – primarily medium-scale patterns – while using computational techniques to interpolate and fill in fine details and broad gradients that our vision processes poorly. This strategy eliminates redundant information between neighboring pixels, since adjacent pixels often contain nearly identical data that our visual system interprets as the same anyway. The result is that pixels are no longer treated uniformly across a display, but instead are strategically designed and positioned to deliver maximum perceptual impact by matching computational resources and pixel precision to the specific sensitivity patterns of human vision, creating more efficient displays that prioritize visual information where it actually matters to our perception.
Virtual pixels are pixels created by combining elements of adjacent pixels: one subpixel from a pixel is combined with one or more subpixels of adjacent pixels to form a pixel that only exists temporally, in the sense that if the display is not updating, that pixel does not exist as a distinct physical part. Dithering is another process that may use a subpixel of one pixel in combination with an adjacent pixel.
Pentile displays and (to a lesser extent) virtual pixels rely on the human visual system’s reliance on green as a luminance channel. A general description of Pentile pixel configurations is that the displays have two sub-pixel pairs which combine red + luminance and blue + luminance, where at least one of the luminance sub-pixels is green. This changes the context of the statement made earlier about “a pixel having all the information” into a pixel having all the information required for the human eye to reconstruct the intended image. A video display is not designed solely against some abstract laboratory test but rather against the ability of a human with typical visual abilities to see an image. This momentarily ignores the trials of using video displays with cameras (moire, multiplexing, sync/timing) and the need to provide data to other constituencies.
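To make the red + luminance / blue + luminance idea concrete, here is a toy sketch of how logical pixels might map onto alternating red-green and blue-green subpixel pairs; it is a simplified illustration of the general description above, not the layout of any specific commercial panel.

```python
# Toy RG/BG mapping: every logical pixel keeps a green (luminance-carrying)
# subpixel, while red and blue alternate and are shared perceptually with
# neighbors. Simplified illustration only.

def pentile_row(logical_pixels):
    """Subpixel pair assigned to each logical pixel position in one row."""
    row = []
    for x in range(logical_pixels):
        chroma = "R" if x % 2 == 0 else "B"   # red and blue alternate
        row.append((chroma, "G"))             # green appears in every pixel
    return row

row = pentile_row(8)
print(row)
# Two subpixels per logical pixel instead of three for a full RGB stripe,
# relying on the eye's green-dominated luminance response to fill the gap.
print(len(row) * 2, "subpixels vs", len(row) * 3, "for an RGB stripe")
```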
The pixel emerges within this historical framing not merely as a technical specification, but as a cultural artifact where mathematical theory, engineering pragmatism, and cultural expression converge into a unified technological object. This positions the pixel as both technically determined and culturally contingent. Alternately the pixel is a metaphor for so many projects that start life as a perfect set of points and end up entering the world through the limited physical picture elements that are on hand.
This duality reveals the pixel’s unique epistemological status. Unlike purely mathematical constructs or purely physical objects, the pixel exists at the intersection of Shannon’s information theory, human perceptual psychology, and material engineering constraints (the name of the book I didn’t want to write and you didn’t want to read). The display pixels, the ones that get introduced in products at press events and giant global trade fairs, exist at this point of tension between commercial requirements, manufacturing constraints, product planning arcs, marketing departments insistent on misidentifying the technology, and users that care about the underlying technology to varying degrees.
Pixels may become persistent conceptual frameworks that maintain coherence across technological transitions while adapting to new material possibilities and cultural demands. As volumetric displays, light field displays, and spatial computing architectures emerge, the pixel’s role as a foundational intersection between mathematical abstraction, engineering implementation, and cultural meaning-making ensures its continued evolution. The pixel will persist not because it represents an optimal technical solution, but because it provides an essential conceptual bridge between the quantitative demands of information processing and the qualitative dimensions of human experience.

1. The quality of a television is impacted by a complex arrangement of analog components assembled by humans with some tolerances interacting in negative and positive ways. The old Sony story is that it took 100,000 televisions to make 1,000 professional monitors, or something along those lines. Deflection coils (±2%), Capacitors (±10% and could be a lot worse), Ferrite Cores (±2%), Oscillators (±10% used to compensate for other things), Flyback transformer (2-5%), Resistors (±5%), Humans inconsistently soldering and making physical connections between subassemblies (±100%). ↩︎
2. The whole memo is topical: http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf ↩︎