
When did hardware antialiasing start being available?













17 votes















An important step towards 3D gaming was the ability to scale sprites or tiles by nonintegral factors. Examples of the former from the eighties were the arcade games Pole Position, Outrun, Space Harrier and Afterburner; a subsequent example of the latter was the SNES Mode 7, used in many games for that machine.



Accustomed to modern hardware, one tends to expect antialiasing; that is, for each screen pixel, the system locates the corresponding data pixel, and if the answer lands between two data pixels, instead of just picking one or the other, it calculates a weighted average of the two.
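
For concreteness, here is a minimal C sketch of the two lookups just described, nearest pixel versus a two-pixel weighted average, for one row of pixels. The buffer names and the 8.8 fixed-point stepping are invented for illustration; this is not any particular machine's hardware.

    #include <stdint.h>

    /* 'step' is source pixels per output pixel in 8.8 fixed point
     * (0x0100 = 1.0, so 0x0180 scales down by 1.5x).               */
    void scale_row_nearest(const uint8_t *src, uint8_t *dst,
                           int dst_w, uint16_t step)
    {
        uint32_t pos = 0;                 /* 8.8 fixed-point position */
        for (int x = 0; x < dst_w; x++, pos += step)
            dst[x] = src[pos >> 8];       /* just pick one pixel      */
    }

    /* 'src' must be padded with one guard pixel on the right. */
    void scale_row_weighted(const uint8_t *src, uint8_t *dst,
                            int dst_w, uint16_t step)
    {
        uint32_t pos = 0;
        for (int x = 0; x < dst_w; x++, pos += step) {
            uint32_t i = pos >> 8;        /* left neighbour            */
            uint32_t f = pos & 0xFF;      /* weight of right neighbour */
            dst[x] = (uint8_t)((src[i] * (256 - f) + src[i + 1] * f) >> 8);
        }
    }

Note that the weighted average only makes sense on intensity (or RGB) values; averaging palette indices would be meaningless, a point one of the answers below returns to.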



But https://arstechnica.com/gaming/2011/08/accuracy-takes-power-one-mans-3ghz-quest-to-build-a-perfect-snes-emulator/ says:




"I don't deny the advantages of treating classic games as something that can be improved upon: N64 emulators employ stunning high-resolution texture packs and 1080p upscaling, while SNES emulators often provide 2x anti-aliasing for Mode7 graphics and cubic-spline interpolation for audio samples. Such emulated games look and sound better. While there is nothing wrong with this, it is contrary to the goal of writing a hardware-accurate emulator."




This suggests the SNES did not actually have antialiasing.



According to https://en.wikipedia.org/wiki/List_of_Sega_arcade_system_boards:




"The Sega Model 2 is an arcade system board released by Sega in 1993. Like the Model 1, it was developed in cooperation with Martin Marietta, and is a further advancement of the earlier Model 1 system. The most noticeable improvement was texture mapping, which enabled polygons to be painted with bitmap images, as opposed to the limited monotone flat shading that Model 1 supported. The Model 2 also introduced the use of texture filtering and texture anti-aliasing."




This suggests Sega arcade machines likewise did not have antialiasing before 1993. A surprising conclusion from today's perspective, but then, the transistors required for the extra calculations might have been a significant cost in those days; arcade games were fast-moving, and CRT TV displays were somewhat blurry anyway. And it would certainly not have been affordable in software on eighties-vintage CPUs.



Are the above inferences correct? Did antialiasing hardware only start being available in arcade and home games machines in the early to mid nineties?


































  • Is there a difference between "texture filtering" and "texture antialiasing"? – traal, Mar 18 at 4:03

  • [+6] "For each screen pixel, the system locates the corresponding data pixel, and if the answer lands between two data pixels, instead of just picking one or the other, it calculates a weighted average of the two." This is actually bilinear filtering, not anti-aliasing. Aliasing will still appear if you do what you described. You need to filter the texture before scaling it in order to have anti-aliasing. – Bregalad, Mar 18 at 13:43

  • [+3] Also — pedantically as ever! — if you calculate a weighted average of just two pixels, that's linear filtering. Do it in two dimensions to get bilinear filtering, which is then a weighted average of four pixels. – Tommy, Mar 18 at 18:14

  • [+3] Another pedantic answer is that you got hardware antialiasing once computers started outputting 320x200 to televisions: a home television doesn't have the sharpest focus in the world, so a certain amount of blur between pixels and across lines was normal, and in fact used to advantage by some software. – Mark, Mar 18 at 22:29

  • [+1] I don't have an answer, but I have a vague hunch that flight simulators were the first to provide antialiased 3D graphics. – user3570736, yesterday































Tags: hardware, graphics, snes, sega














edited Mar 18 at 11:00 by Raffzahn
asked Mar 18 at 2:47 by rwallace





















3 Answers


















24 votes














There's something of a conflation here of antialiasing and filtering, I think. Antialiasing is literally preventing things from adopting aliases — e.g. if a diagonal line looks like a staircase rather than a diagonal line, it has adopted an alias. So you can imagine the same thing happening to textures as they rotate or take awkward angles. But it's always about accurately portraying the information you have.



Conversely, bilinear filtering is just a different way of guessing at what is between the information you have. It's about generating extra information — specifically positing that there's a linear gradient between every source pixel and the next, rather than a hard edge.
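
A minimal sketch of that guess in two dimensions, assuming an 8-bit greyscale texture and 24.8 fixed-point coordinates (the helper and its layout are invented, not any console's actual datapath):

    #include <stdint.h>

    /* Bilinearly sample a greyscale texture at a fractional coordinate.
     * u and v are 24.8 fixed point; tex is w bytes wide, and the caller
     * guarantees the sample stays one texel inside the border.         */
    uint8_t sample_bilinear(const uint8_t *tex, int w,
                            uint32_t u, uint32_t v)
    {
        uint32_t x = u >> 8, y = v >> 8;        /* integer texel coords */
        uint32_t fu = u & 0xFF, fv = v & 0xFF;  /* fractional parts     */

        const uint8_t *row0 = tex + y * w + x;
        const uint8_t *row1 = row0 + w;

        /* Blend horizontally on both rows (linear), then vertically
         * between the two results (bi-linear): four texels in total.  */
        uint32_t top = row0[0] * (256 - fu) + row0[1] * fu;
        uint32_t bot = row1[0] * (256 - fu) + row1[1] * fu;
        return (uint8_t)((top * (256 - fv) + bot * fv) >> 16);
    }

Four texels go into each output pixel, matching the linear-versus-bilinear distinction drawn in the comments on the question.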



That being said: no, the SNES does neither. It's a simple nearest-neighbour colour grab only. Ditto for the scaling systems that precede it — including the Lynx in the home (and anywhere else you want to take it; I suggest the battery shop) and arcade machines like Sega's.



This is true up to the Saturn and PlayStation. The Nintendo 64 has bilinear filtering, and everything after that unambiguously has both*.



So I believe the sources are correct.



*) you can technically fake antialiasing on anything with subpixel precision and alpha transparency by drawing multiple passes with slightly adjusted coordinates. So an N64 could do that, it'd just be expensive.
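
As a sketch of that multi-pass trick, assuming a hypothetical render_scene callback that accepts a sub-pixel offset; on real hardware the averaging would instead be done by drawing each pass at 1/N alpha into the frame buffer:

    #include <stdint.h>
    #include <stdlib.h>

    /* Render the scene four times at different sub-pixel offsets and
     * average the passes: poor man's 4x supersampling. render_scene()
     * and the RGB888 buffer layout are invented for illustration.     */
    void render_supersampled(void (*render_scene)(uint8_t *rgb,
                                                  int w, int h,
                                                  float dx, float dy),
                             uint8_t *out, int w, int h)
    {
        static const float jitter[4][2] = {
            {0.25f, 0.25f}, {0.75f, 0.25f}, {0.25f, 0.75f}, {0.75f, 0.75f}
        };
        size_t n = (size_t)w * h * 3;
        uint32_t *acc = calloc(n, sizeof *acc);
        uint8_t  *tmp = malloc(n);

        for (int p = 0; p < 4; p++) {
            render_scene(tmp, w, h, jitter[p][0], jitter[p][1]);
            for (size_t i = 0; i < n; i++)
                acc[i] += tmp[i];                /* accumulate pass  */
        }
        for (size_t i = 0; i < n; i++)
            out[i] = (uint8_t)(acc[i] / 4);      /* average of four  */

        free(acc);
        free(tmp);
    }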






answered Mar 18 at 3:05 by Tommy


















  • [+2] @RossRidge to be slightly contrary, bilinear filtering makes a difference only when textures are larger than they should be. Mipmapping attempts to eliminate shimmering on distant polygons. Though trilinear filtering is a blend between the bilinear filtering of two mipmap levels. – Tommy, Mar 18 at 11:23

  • [+2] (Mipmapping = providing additional scaled-down copies of a texture ahead of time and picking one based on the output density when you draw, so that you're not trying to scale down very much in real time. So you can do an expensive downscaling once and then just look it up. Usually a box filter or a Gaussian(-esque) low-pass filter. [See the sketch below.]) – Tommy, Mar 18 at 11:26

  • @RossRidge your comment is gone, so it's hard to discover my error. Such that it may help anybody else, my interpretation at the time was that you suggested that bilinear filtering helped to eliminate shimmer. Clearly an erroneous interpretation based on your new comment. But I don't think my responses, however misplaced, have subtracted value for other StackOverflow users, so that's something. – Tommy, Mar 18 at 18:11
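
The mipmapping described in the comments above, as a minimal sketch for building one level with a 2x2 box filter (8-bit greyscale, even dimensions; names invented for illustration):

    #include <stdint.h>

    /* Build the next mipmap level: each destination texel is the
     * average of the four source texels it covers. 'w' and 'h' are
     * the source dimensions and must be even in this sketch.        */
    void mip_halve(const uint8_t *src, uint8_t *dst, int w, int h)
    {
        for (int y = 0; y < h / 2; y++)
            for (int x = 0; x < w / 2; x++) {
                const uint8_t *p = src + (2 * y) * w + 2 * x;
                dst[y * (w / 2) + x] =
                    (uint8_t)((p[0] + p[1] + p[w] + p[w + 1] + 2) / 4);
            }
    }

Because the averaging happens once, ahead of time, the per-pixel cost at draw time is just picking the right level and looking it up, which is exactly the appeal the comment describes.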



















9 votes














[not a complete answer, but some remarks too big for a comment]



[it also focuses on games, as they are the most complex real-time application; antialiasing for desktop UIs and editors is a far less time-critical issue and a subset thereof]



Need for Colours



A point often forgotten from today's view is that antialiasing needs a video system with fine colour control to smooth out edges and transitions. So either one with

  • a fairly large number of colours, covering in-between shades and intensities, or with

  • software-definable colours taken from a larger palette than is displayed at once.

The first requires considerably more video memory, while the second needs more sophisticated video hardware using a colour LUT (see the sketch below). Early systems with just a few, predefined colours, like the TI-99/4, 2600 or C64, had neither.
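
A sketch of the second option, a colour look-up table applied at scan-out, assuming 4 bits per pixel in the frame buffer expanding to a 12-bit RGB output (formats and names invented for illustration):

    #include <stdint.h>

    /* Scan out one line through a CLUT: the frame buffer stores small
     * palette indices (4bpp here, two pixels per byte), and the video
     * hardware expands each index through a programmable table into a
     * wider colour (RGB444 here). 'width' must be even.              */
    void scanout_line(const uint8_t *fb_4bpp, const uint16_t clut[16],
                      uint16_t *rgb_out, int width)
    {
        for (int x = 0; x < width; x += 2) {
            uint8_t byte = fb_4bpp[x / 2];
            rgb_out[x]     = clut[byte >> 4];  /* high nibble: left  */
            rgb_out[x + 1] = clut[byte & 0xF]; /* low nibble: right  */
        }
    }

The frame buffer stays small, yet software can load the table with whatever in-between shades an antialiased edge needs, within the limit of 16 simultaneous entries here.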



In contrast, the Atari 400/800 could select the displayed colours from a table of (up to) 128 (*1), which did allow some really nifty shading effects and would have been a great basis for antialiasing, except that multiple colours within a line were quite restricted. Which brings us to the next point.



(Bitmap) Memory



Systems way into the 1980s were quite memory limited. Thus graphics were often character based, up to the extreme of building bitmaps out of special character formats. But antialiasing needs bitmap-based memory with the ability to colour each pixel separately. Even a simple TV resolution of 320x200 (*2) needs 64,000 bytes of screen memory at one byte per pixel, the minimum for a sufficient range of colours. An enormous and expensive amount for early games, and still a lot to be handled in time by the 8-bit CPUs used throughout the 1980s. It was far more sensible to apply data reduction than to go towards a full bitmap.



Graphic Objects



Today we think of textures as objects to be placed somewhere and manipulated. Besides the need for (3D) surfaces to place them on, this again assumes a flat bitmap view. For most of game history, movable graphic objects were sprites layered on top of a background, added during line processing. These were simple insertions into the pixel stream, with no processing involved (see the sketch below). More often than not they were also limited to a single colour, in the form of set/not-set data. Their big advantage was free positioning without any regard to the background; that means no interaction of any kind, just simple replacement. Multiple colours were usually achieved by layering several sprites at the same position, again without any processing beyond a priority encoder for the layers. The whole setup worked extremely well with low memory requirements and easy handling. Not so well with colours and sizes.
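
A sketch of that insertion, for one pixel of one scan line. The sprite count, width and transparent index are invented, but the point stands: the "mix" is pure selection, with no arithmetic that could blend an edge.

    #include <stdint.h>

    #define NSPRITES 8
    #define SPR_W    16

    struct sprite { int x; const uint8_t *pixels; int enabled; };

    /* Classic sprite hardware on one scan line: a transparent-colour
     * test and a fixed priority order decide which pixel wins; lower
     * sprite number = higher priority, as in a priority encoder.    */
    uint8_t mix_pixel(uint8_t background,
                      const struct sprite spr[NSPRITES], int screen_x)
    {
        for (int s = 0; s < NSPRITES; s++) {
            int off = screen_x - spr[s].x;
            if (spr[s].enabled && off >= 0 && off < SPR_W &&
                spr[s].pixels[off] != 0)     /* 0 = transparent      */
                return spr[s].pixels[off];   /* replace, never blend */
        }
        return background;
    }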



That's why systems moved towards bit-blit once bitmap frame buffers became easily available. With bit-blit operations, objects of arbitrary size can be drawn on the screen, and in all the colours available. While still 2D, this is already far closer to today's understanding of textures than sprites are.

And since a blit is a well-defined operation between the frame buffer (background) and the object, antialiasing operations could for the first time be performed by the bit-blitting hardware, as in the sketch below.
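
A sketch of such a blit, assuming 8-bit greyscale and a per-pixel coverage value (all names invented): unlike the sprite path above, it computes a genuine weighted combination of object and background.

    #include <stdint.h>

    /* Blit with arithmetic between object and frame buffer: each
     * object pixel carries a coverage value (0..255) used to blend
     * it with the background, the kind of edge smoothing that pure
     * sprite insertion could not do.                               */
    void blit_blend(uint8_t *fb, int fb_stride, int dst_x, int dst_y,
                    const uint8_t *obj, const uint8_t *coverage,
                    int w, int h)
    {
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                uint8_t *d = &fb[(dst_y + y) * fb_stride + (dst_x + x)];
                uint8_t  a = coverage[y * w + x];
                *d = (uint8_t)((obj[y * w + x] * a +
                                *d * (255 - a)) / 255);
            }
    }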



Use-Case



Each and every technology needs a use case beyond the desk of an engineer/hacker. So even ignoring cost, there was no real need for antialiasing and similar technology until the early 1990s. Games improved quite well from black and white (Space Invaders) over some colours (Galaxian in 1979 was the first colour game) to more of that (Double Dragon in 1987). Games using 'simple' hardware were so advanced that early polygon-based boards like Namco's System 21 of 1987, definitely not a lightweight system, looked like a step back to users (*3). And only such systems would have benefited from antialiasing.



Similarly with resolution. Back in 'the good old days', hardware was fixed definition and software tightly coupled to it. Game hardware was made for a fixed screen resolution (*4), usually TV-like. There was no real need to scale up or down for newer screens with a different resolution. If a sprite was needed in different sizes, simply having additional copies in ROM solved it without any additional hardware.



Antialiasing did not become an issue until games either had to work at different resolutions or were based on a real 3D environment. Both became the case in the 1990s, and on PCs.



Conclusion



So while the timeline and hardware variations hold many more details, it is safe to state that antialiasing as we know it today needs a certain level of memory, bitmap capability and colour depth, and a certain way of handling graphic objects, to be viable. A level that wasn't reached (in general) before ~1990. And like all technology, it not only has to be enabled by engineering; it is worthless without an application that needs it.



That picture changes, of course, as noted by the question, when running old game data on modern hardware. The now-standard features allow exactly the same techniques, like smooth scaling and blending, to be used to adapt old games to today's screens.




*1 - In fact, already the 2600 offered a quite remarkable colour capability. But with its low 'resolution', antialiasing doesn't make much sense.



*2 - The Atari 800 could already do 384x240 in overscan, and many arcade machines used CRTs of similar or even better resolution. Keep in mind that the limit on colour/pixel density for TV is due to the transmission encoding (NTSC, PAL, etc.), not the CRT. Arcade machines didn't have that limitation, so better resolutions were well within the CRT's specs. Resulting in even more memory.



*3 - Compare the bulky graphics of Winning Run, using two 68ks and 5 DSPs, with the smooth textures of Double Dragon on only three 8-bit CPUs.



*4 - Screen resolution meaning the capabilities of the intended display, usually a TV(-like) screen, not the graphics resolution/mode displayed at that screen resolution.






































6 votes














This was in no way part of a hardware-assisted 3D pipeline, but there were attempts in PC-class hardware to achieve anti-aliasing even as early as 1990. Edsun Labs made a drop-in replacement RAMDAC for VGA boards that used some of the 256 possible color values as opcodes that would enable color blending between pixels on a line. This let a nominally 8bpp VGA board draw more colors, specifically colors that were useful for drawing smoother images.



    This article talks about the specific implementation.



    https://www.analog.com/media/en/analog-dialogue/volume-24/number-3/articles/volume24-number3.pdf#page=3



    Michael Abrash also wrote about the product and its strengths and limitations in Dr. Dobb's Journal:



    http://archive.gamedev.net/archive/reference/articles/article371.html



At the time, the product got some press and then pretty much immediately died out. CEG (Continuous Edge Graphics) was a low-cost play more than anything else, and it suffered from being poorly suited to displaying dynamic graphics. (Which were very much on the rise through both Windows and various games.)
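
As a purely conceptual sketch of the idea: within the 256 byte values, most act as ordinary palette entries, while a reserved range acts as opcodes that make the DAC output a mix of the neighbouring colours on the line. The 0xF0 threshold, the 16-step mix and the greyscale "palette" below are invented for illustration only; the real encoding is documented in the articles linked above.

    #include <stdint.h>

    #define CEG_OP 0xF0  /* hypothetical: bytes >= 0xF0 are blend ops */

    /* Expand one scan line of bytes to 8-bit output. Ordinary bytes
     * index a palette; an 'opcode' byte instead outputs a blend of
     * its two neighbours' colours. Adjacent opcodes are not handled
     * in this sketch.                                               */
    void ceg_line(const uint8_t *in, int n, const uint8_t lut[256],
                  uint8_t *out)
    {
        for (int i = 0; i < n; i++) {
            if (in[i] >= CEG_OP && i > 0 && i + 1 < n) {
                int t = in[i] - CEG_OP + 1;         /* weight 1..16 */
                out[i] = (uint8_t)((lut[in[i - 1]] * (16 - t) +
                                    lut[in[i + 1]] * t) / 16);
            } else {
                out[i] = lut[in[i]];
            }
        }
    }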









          That's why systems went toward Bit-Blit, once bitmap frame buffers became easy available. With bit-blit operations objects of arbitrary size can be drawn on the screen - and it can be done in all colours available. While still 2D, this is already way more like today's understanding of textures than sprites are.



          And since it is based on well defined operations between frame buffer (background) and object, antialiasing operations can for the first time be performed by the bit bliting hardware.



          Use-Case



          Each and every technology needs a use case beyond the desk of an engineer/hacker. So even with ignoring cost, there was no real need for Antialising and likewise technology until the early 1990s. Games did quite well improve from being black and white (Space Invaders) over some colours (Galaxian in 1979 was the first colour game) and more of that (Double Dragonin 1987). Games using 'simple' hardware were that advanced, that early polygon based boards like Namcos System 21 of 1987, definitely not a lightweight system, looked like a step back to users (*3). And only such systems would benefit from antialiasing.



          Similar with resolution. Back in 'the good old days' hardware was fixed definition and software tightly coupled. Game hardware was made for a fixed screen resolution (*4), usually TV like. There was no real need to up or downscale for newer screens with a different resolution. If a sprite was needed in different sizes, simply having additional copies in ROM solved it without any additional hardware.



          Antialiasing became not an issue before games either had to work in different resolutions or games where based on a real 3D environment. Both became a case in the 1990 and on PCs



          Conclusion



          So while there happened many more details in timeline and hardware variation, it is safe to state that antialiasing as we know it today does need a certain level in memory, bitmap and colour available and how objects are handled to make it viable. A level that wasn't reached (in general) before ~1990. And like all technology, it needs not only to be enabled by engineering, but it's worthless without the need for an application.



          That picture changes of course, and as noted by the question, when running old game data on modern hardware. The now standard features allow to use exactly the same advantages, like smooth scaling and blending, to adapt them to today's screens.




          *1 - In fact, already the 2600 offered a quite remarkable colour capacity. But with it's low 'resolution' antialiasing doesn't make much sense.



          *2 - The Atari 800 could already do 384x240 in overscan and many arcade machines did use CRTs in similar or even better Resolution. Keep in mind, the limit for colour/pixel density on TV is due the transfer encoding (NTSC, PAL, etc.), not the CRT. Arcade machines didn't had that limitation, so better resolutions where quite within the CTR specs. Resulting in even more memory.



          *3 - Compare the bulky graphics of Winning Run using two 68k and 5 DSPs with the smooth textures of Double Dragon with only three 8 bit CPUs



          *4 - Screen resolution, the capabilities of the intended display, usually a TV (like) screen, not graphics resolution/mode displayed on this screen resolution.






          share|improve this answer















          [not a complete answer, but some remarks too big for a comment]



          [also it focuses on games, as they are the most complex, real time application. Antialiasing for desktop UI and editors are a fairly insensitive issue and a subset thereof]



          Need for Colours



          A point, often forgotten from today's view is that antialiasing does need a video system systems with fine colour tuning to smoothen out edges/transitions. So either one with



          • a fairly large number of colours, covering in-between shades and intensities, or with


          • software definable colours from a larger palette than shown.


          The first will require considerable more video memory, while the second needs more sophisticated video hardware using a Colour LUT. Early systems with just a few and in addition predefined colours, like on a 99/4, 2600 or C64, will not have it.



          In contrast, the Atari 400/800 could select the displayed colours form a table of (up to) 128 (*1), which dis allow some really nifty shading effects and would have made a great support for antialiasing - except multiple colours within a line where quite restricted. Which brings the next point



          (Bitmap)Memory



          Systems, way into the 1980s were quite memory limited. Thus graphics where often character based - up to the extreme of making bitmap in special character formats. But antialiasing does need a bitmap based memory with the ability to colour each pixel separate. Even for a simple TV resolution of 320x200 (*2) needs 64000 bytes of screen memory when using with sufficient colours. An enormous and expensive amount for early games and still a lot to be handled in time by the 8 bit CPUs used thoughout the 1980s. It was way more conceivable to apply data reduction than go toward full bitmap.



          Graphic Objects



          Today we think in textures to be placed somewhere as objects to be manipulated. Beside the need for (3D) surfaces to place them this again is based on a flat bitmap view. For most time of game history movable graphic objects where sprites, layered on top of a background, added during line processing. These where simple insertions into the pixel stream. No processing involved. More often than not, also limited to a single colour in form of set/not set data. Their big advantage was a fee positioning without any regards to the background, this means no interaction of any kind but simple replacement. Multiple colours where usually made by layering multiple sprites at the same position. Again without any processing but a priority encoder for layers. The whole setup worked extreme well with low memory requirements and easy handling. Not as well with colours and sizes.



          That's why systems went toward Bit-Blit, once bitmap frame buffers became easy available. With bit-blit operations objects of arbitrary size can be drawn on the screen - and it can be done in all colours available. While still 2D, this is already way more like today's understanding of textures than sprites are.



          And since it is based on well defined operations between frame buffer (background) and object, antialiasing operations can for the first time be performed by the bit bliting hardware.



          Use-Case



          Each and every technology needs a use case beyond the desk of an engineer/hacker. So even with ignoring cost, there was no real need for Antialising and likewise technology until the early 1990s. Games did quite well improve from being black and white (Space Invaders) over some colours (Galaxian in 1979 was the first colour game) and more of that (Double Dragonin 1987). Games using 'simple' hardware were that advanced, that early polygon based boards like Namcos System 21 of 1987, definitely not a lightweight system, looked like a step back to users (*3). And only such systems would benefit from antialiasing.



          Similar with resolution. Back in 'the good old days' hardware was fixed definition and software tightly coupled. Game hardware was made for a fixed screen resolution (*4), usually TV like. There was no real need to up or downscale for newer screens with a different resolution. If a sprite was needed in different sizes, simply having additional copies in ROM solved it without any additional hardware.



          Antialiasing became not an issue before games either had to work in different resolutions or games where based on a real 3D environment. Both became a case in the 1990 and on PCs



          Conclusion



          So while there happened many more details in timeline and hardware variation, it is safe to state that antialiasing as we know it today does need a certain level in memory, bitmap and colour available and how objects are handled to make it viable. A level that wasn't reached (in general) before ~1990. And like all technology, it needs not only to be enabled by engineering, but it's worthless without the need for an application.



          That picture changes of course, and as noted by the question, when running old game data on modern hardware. The now standard features allow to use exactly the same advantages, like smooth scaling and blending, to adapt them to today's screens.




          *1 - In fact, already the 2600 offered a quite remarkable colour capacity. But with it's low 'resolution' antialiasing doesn't make much sense.



          *2 - The Atari 800 could already do 384x240 in overscan and many arcade machines did use CRTs in similar or even better Resolution. Keep in mind, the limit for colour/pixel density on TV is due the transfer encoding (NTSC, PAL, etc.), not the CRT. Arcade machines didn't had that limitation, so better resolutions where quite within the CTR specs. Resulting in even more memory.



          *3 - Compare the bulky graphics of Winning Run using two 68k and 5 DSPs with the smooth textures of Double Dragon with only three 8 bit CPUs



          *4 - Screen resolution, the capabilities of the intended display, usually a TV (like) screen, not graphics resolution/mode displayed on this screen resolution.







          share|improve this answer














          share|improve this answer



          share|improve this answer








          edited Mar 18 at 15:21









          manassehkatz

          3,032623




          3,032623










          answered Mar 18 at 13:05









          RaffzahnRaffzahn

          54.2k6133219




          54.2k6133219





















              6














              This was in no way part of a hardware assisted 3D pipeline, but there were attempts made in PC-class hardware to achieve anti-aliasing even as early as 1990. Edsun Labs made a drop in replacement RAMDAC for VGA boards that used some of the 256 possible color values as opcodes that would enable color blending between pixels on a line. This let a nominally 8bpp VGA board draw more colors - colors that were useful specifically from the perspective of drawing smoother images.



              This article talks about the specific implementation.



              https://www.analog.com/media/en/analog-dialogue/volume-24/number-3/articles/volume24-number3.pdf#page=3



              Michael Abrash also wrote about the product and its strengths and limitations in Dr. Dobb's Journal:



              http://archive.gamedev.net/archive/reference/articles/article371.html



              At the time, the product got some press and then pretty much immediately died out. CEG was a low-cost play more than anything else, and it suffered from being poorly suited for displaying dynamic graphics. (Which were very much on the rise through both Windows and various games.)






              share|improve this answer



























                6














                This was in no way part of a hardware assisted 3D pipeline, but there were attempts made in PC-class hardware to achieve anti-aliasing even as early as 1990. Edsun Labs made a drop in replacement RAMDAC for VGA boards that used some of the 256 possible color values as opcodes that would enable color blending between pixels on a line. This let a nominally 8bpp VGA board draw more colors - colors that were useful specifically from the perspective of drawing smoother images.



                This article talks about the specific implementation.



                https://www.analog.com/media/en/analog-dialogue/volume-24/number-3/articles/volume24-number3.pdf#page=3



                Michael Abrash also wrote about the product and its strengths and limitations in Dr. Dobb's Journal:



                http://archive.gamedev.net/archive/reference/articles/article371.html



                At the time, the product got some press and then pretty much immediately died out. CEG was a low-cost play more than anything else, and it suffered from being poorly suited for displaying dynamic graphics. (Which were very much on the rise through both Windows and various games.)






                share|improve this answer

























                  6












                  6








                  6







                  This was in no way part of a hardware assisted 3D pipeline, but there were attempts made in PC-class hardware to achieve anti-aliasing even as early as 1990. Edsun Labs made a drop in replacement RAMDAC for VGA boards that used some of the 256 possible color values as opcodes that would enable color blending between pixels on a line. This let a nominally 8bpp VGA board draw more colors - colors that were useful specifically from the perspective of drawing smoother images.



                  This article talks about the specific implementation.



                  https://www.analog.com/media/en/analog-dialogue/volume-24/number-3/articles/volume24-number3.pdf#page=3



                  Michael Abrash also wrote about the product and its strengths and limitations in Dr. Dobb's Journal:



                  http://archive.gamedev.net/archive/reference/articles/article371.html



                  At the time, the product got some press and then pretty much immediately died out. CEG was a low-cost play more than anything else, and it suffered from being poorly suited for displaying dynamic graphics. (Which were very much on the rise through both Windows and various games.)






                  share|improve this answer













                  This was in no way part of a hardware assisted 3D pipeline, but there were attempts made in PC-class hardware to achieve anti-aliasing even as early as 1990. Edsun Labs made a drop in replacement RAMDAC for VGA boards that used some of the 256 possible color values as opcodes that would enable color blending between pixels on a line. This let a nominally 8bpp VGA board draw more colors - colors that were useful specifically from the perspective of drawing smoother images.



                  This article talks about the specific implementation.



                  https://www.analog.com/media/en/analog-dialogue/volume-24/number-3/articles/volume24-number3.pdf#page=3



                  Michael Abrash also wrote about the product and its strengths and limitations in Dr. Dobb's Journal:



                  http://archive.gamedev.net/archive/reference/articles/article371.html



                  At the time, the product got some press and then pretty much immediately died out. CEG was a low-cost play more than anything else, and it suffered from being poorly suited for displaying dynamic graphics. (Which were very much on the rise through both Windows and various games.)







                  share|improve this answer












                  share|improve this answer



                  share|improve this answer










                  answered Mar 18 at 13:07









                  mschaefmschaef

                  2,321714




                  2,321714



























                      draft saved

                      draft discarded
















































                      Thanks for contributing an answer to Retrocomputing Stack Exchange!


                      • Please be sure to answer the question. Provide details and share your research!

                      But avoid


                      • Asking for help, clarification, or responding to other answers.

                      • Making statements based on opinion; back them up with references or personal experience.

                      To learn more, see our tips on writing great answers.




                      draft saved


                      draft discarded














                      StackExchange.ready(
                      function ()
                      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f9368%2fwhen-did-hardware-antialiasing-start-being-available%23new-answer', 'question_page');

                      );

                      Post as a guest















                      Required, but never shown





















































                      Required, but never shown














                      Required, but never shown












                      Required, but never shown







                      Required, but never shown

































                      Required, but never shown














                      Required, but never shown












                      Required, but never shown







                      Required, but never shown







                      Popular posts from this blog

                      How should I support this large drywall patch? Planned maintenance scheduled April 23, 2019 at 00:00UTC (8:00pm US/Eastern) Announcing the arrival of Valued Associate #679: Cesar Manara Unicorn Meta Zoo #1: Why another podcast?How do I cover large gaps in drywall?How do I keep drywall around a patch from crumbling?Can I glue a second layer of drywall?How to patch long strip on drywall?Large drywall patch: how to avoid bulging seams?Drywall Mesh Patch vs. Bulge? To remove or not to remove?How to fix this drywall job?Prep drywall before backsplashWhat's the best way to fix this horrible drywall patch job?Drywall patching using 3M Patch Plus Primer

                      random experiment with two different functions on unit interval Announcing the arrival of Valued Associate #679: Cesar Manara Planned maintenance scheduled April 23, 2019 at 00:00UTC (8:00pm US/Eastern)Random variable and probability space notionsRandom Walk with EdgesFinding functions where the increase over a random interval is Poisson distributedNumber of days until dayCan an observed event in fact be of zero probability?Unit random processmodels of coins and uniform distributionHow to get the number of successes given $n$ trials , probability $P$ and a random variable $X$Absorbing Markov chain in a computer. Is “almost every” turned into always convergence in computer executions?Stopped random walk is not uniformly integrable

                      Lowndes Grove History Architecture References Navigation menu32°48′6″N 79°57′58″W / 32.80167°N 79.96611°W / 32.80167; -79.9661132°48′6″N 79°57′58″W / 32.80167°N 79.96611°W / 32.80167; -79.9661178002500"National Register Information System"Historic houses of South Carolina"Lowndes Grove""+32° 48' 6.00", −79° 57' 58.00""Lowndes Grove, Charleston County (260 St. Margaret St., Charleston)""Lowndes Grove"The Charleston ExpositionIt Happened in South Carolina"Lowndes Grove (House), Saint Margaret Street & Sixth Avenue, Charleston, Charleston County, SC(Photographs)"Plantations of the Carolina Low Countrye