Computer graphics (computer science)

Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to the study of three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.

Overview

Computer graphics studies the manipulation of visual and geometric information using computational techniques. It focuses on the mathematical and computational foundations of image generation and processing rather than purely aesthetic issues. Computer graphics is often differentiated from the field of visualization, although the two fields have many similarities.

Connected studies include:

  • Applied mathematics
  • Computational geometry
  • Computational topology
  • Computer vision
  • Image processing
  • Information visualization
  • Scientific visualization

Applications of computer graphics include:

  • Digital art
  • Special effects
  • Video games
  • Visual effects

    History

    One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and hand—produced by Ed Catmull and Fred Parke at the University of Utah. Swedish inventor Håkan Lans applied for the first patent on color graphics in 1979.

    There are several international conferences and journals where the most significant results in computer graphics are published. Among them are the SIGGRAPH and Eurographics conferences and the Association for Computing Machinery (ACM) Transactions on Graphics journal. The joint Eurographics and ACM SIGGRAPH symposium series features the major venues for the more specialized sub-fields: the Symposium on Geometry Processing, the Symposium on Rendering, and the Symposium on Computer Animation. As in the rest of computer science, conference publications in computer graphics are generally more significant than journal publications (and consequently have lower acceptance rates).

    Subfields

    A broad classification of major subfields in computer graphics might be:

    1. Geometry: studies ways to represent and process surfaces
    2. Animation: studies ways to represent and manipulate motion
    3. Rendering: studies algorithms to reproduce light transport
    4. Imaging: studies image acquisition or image editing
    5. Topology: studies the behaviour of spaces and surfaces.

    Geometry

    The subfield of geometry studies the representation of three-dimensional objects in a discrete digital setting. Because the appearance of an object depends largely on its exterior, boundary representations are most commonly used. Two-dimensional surfaces are a good representation for most objects, though they may be non-manifold. Since surfaces are not finite, discrete digital approximations are used. Polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation, although point-based representations have become more popular recently (see for instance the Symposium on Point-Based Graphics). These representations are Lagrangian, meaning the spatial locations of the samples are independent. Recently, Eulerian surface descriptions (i.e., where spatial samples are fixed) such as level sets have been developed into a useful representation for deforming surfaces which undergo many topological changes (with fluids being the most notable example).
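
    As a concrete illustration of these representations, the sketch below (the class and function names are illustrative, not taken from any particular library) builds a Lagrangian boundary representation, an indexed triangle mesh whose sample positions are stored explicitly, and contrasts it with an Eulerian level set that samples a signed-distance function on a fixed grid.

        import math
        from dataclasses import dataclass

        @dataclass
        class TriangleMesh:
            # Lagrangian boundary representation: the surface is sampled by vertices
            # whose 3D positions are stored explicitly and are free to move.
            vertices: list  # (x, y, z) positions
            faces: list     # (i, j, k) vertex-index triples

            def face_normal(self, f):
                # Unnormalized normal of triangle f via the cross product of two edge vectors.
                (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (self.vertices[i] for i in self.faces[f])
                ux, uy, uz = bx - ax, by - ay, bz - az
                vx, vy, vz = cx - ax, cy - ay, cz - az
                return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

        # The simplest possible mesh: one triangle lying in the xy-plane.
        mesh = TriangleMesh(vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
                            faces=[(0, 1, 2)])
        print(mesh.face_normal(0))  # (0.0, 0.0, 1.0)

        # Eulerian alternative: a level set samples signed distance to the surface at
        # fixed grid locations; the surface is wherever the sampled values cross zero.
        def sphere_sdf(x, y, z, radius=1.0):
            return math.sqrt(x * x + y * y + z * z) - radius

        grid = [[[sphere_sdf(i * 0.5 - 1.0, j * 0.5 - 1.0, k * 0.5 - 1.0) for k in range(5)]
                 for j in range(5)] for i in range(5)]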

    Geometry Subfields
  • Implicit surface modeling – an older subfield which examines the use of algebraic surfaces, constructive solid geometry, etc., for surface representation.
  • Digital geometry processing – surface reconstruction, simplification, fairing, mesh repair, parameterization, remeshing, mesh generation, surface compression, and surface editing all fall under this heading.
  • Discrete differential geometry – a nascent field which defines geometric quantities for the discrete surfaces used in computer graphics.
  • Point-based graphics – a recent field which focuses on points as the fundamental representation of surfaces.
  • Subdivision surfaces
  • Out-of-core mesh processing – another recent field which focuses on mesh datasets that do not fit in main memory.

    Animation

    The subfield of animation studies descriptions for surfaces (and other phenomena) that move or deform over time. Historically, most work in this field has focused on parametric and data-driven models, but physical simulation has recently become more popular as computers have become more powerful.
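
    To make the contrast concrete, the sketch below (illustrative names, not drawn from the article) animates a single value both ways: a parametric, data-driven approach that linearly interpolates authored keyframes, and a physically based approach that integrates gravity one time step at a time.

        # Parametric / data-driven animation: positions come from authored keyframes,
        # and in-between frames are obtained by interpolation.
        def keyframe_position(keys, t):
            # keys: time-sorted list of (time, value) pairs; linear interpolation between neighbours.
            for (t0, p0), (t1, p1) in zip(keys, keys[1:]):
                if t0 <= t <= t1:
                    w = (t - t0) / (t1 - t0)
                    return p0 + w * (p1 - p0)
            return keys[-1][1]

        # Physically based animation: positions come from integrating equations of motion.
        def simulate_fall(p=10.0, v=0.0, dt=1.0 / 60.0, steps=60, g=-9.81):
            # Semi-implicit Euler integration of a single particle under gravity.
            for _ in range(steps):
                v += g * dt
                p += v * dt
            return p

        print(keyframe_position([(0.0, 0.0), (1.0, 5.0)], 0.5))  # 2.5, halfway between the two keys
        print(simulate_fall())  # about 5.0, close to the analytic 10 + 0.5 * g * 1.0**2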

    Subfields
  • Performance capture
  • Character animation
  • Physical simulation (e.g. cloth modeling, animation of fluid dynamics, etc.)

    Rendering

    Rendering generates images from a model. Rendering may simulate light transport to create realistic images or it may create images that have a particular artistic style in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light passes from one place to another) and scattering (how surfaces interact with light). See Rendering (computer graphics) for more information.
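
    The usual way to tie these two operations together, not stated explicitly in this article, is the rendering equation: outgoing radiance at a point is emitted radiance plus transported incoming radiance weighted by a scattering term,

        L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

    where f_r is the scattering (BSDF) term, L_i is the incoming radiance delivered by transport, and the cosine factor accounts for the angle between the incoming direction \omega_i and the surface normal n.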

    Transport

    Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport.
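
    A minimal sketch of a visibility query, with illustrative names and a deliberately tiny scene of spheres: a shadow ray is cast from a shading point toward a light, and transport along that path is blocked if any sphere intersects the ray before it reaches the light.

        import math

        def ray_hits_sphere(origin, direction, center, radius, max_t):
            # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t in (0, max_t).
            # direction is assumed to be unit length, so the quadratic's leading coefficient is 1.
            ox, oy, oz = (origin[i] - center[i] for i in range(3))
            b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
            c = ox * ox + oy * oy + oz * oz - radius * radius
            disc = b * b - 4.0 * c
            if disc < 0.0:
                return False
            t = (-b - math.sqrt(disc)) / 2.0
            return 1e-6 < t < max_t

        def light_visible(point, light, spheres):
            # Visibility query: is the segment from the shading point to the light unobstructed?
            d = [light[i] - point[i] for i in range(3)]
            dist = math.sqrt(sum(x * x for x in d))
            d = [x / dist for x in d]
            return not any(ray_hits_sphere(point, d, c, r, dist) for c, r in spheres)

        # A unit sphere at the origin blocks the first light path but not the second.
        blocker = [((0.0, 0.0, 0.0), 1.0)]
        print(light_visible((0.0, -2.0, 0.0), (0.0, 2.0, 0.0), blocker))  # False: the sphere is in the way
        print(light_visible((3.0, -2.0, 0.0), (3.0, 2.0, 0.0), blocker))  # True: the path is clear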

    Scattering

    Models of scattering and shading are used to describe the appearance of a surface. In graphics these problems are often studied within the context of rendering since they can substantially affect the design of rendering algorithms. Shading can be broken down into two orthogonal issues, which are often studied independently:

    1. scattering – how light interacts with the surface at a given point
    2. shading – how material properties vary across the surface

    The former problem refers to scattering, i.e., the relationship between incoming and outgoing illumination at a given point. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function or BSDF. The latter issue addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where). Descriptions of this kind are typically expressed with a program called a shader. (Note that there is some confusion since the word "shader" is sometimes used for programs that describe local geometric variation.)
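
    A minimal sketch of the two pieces, with names that are illustrative rather than taken from any particular renderer: a Lambertian BRDF serves as the scattering function at a single point, and a simple checkerboard shader decides which albedo that BRDF uses depending on where on the surface it is evaluated.

        import math

        def lambertian_brdf(albedo):
            # Scattering: the relationship between incoming and outgoing light at one point.
            # A Lambertian surface scatters light equally in all outgoing directions, so its
            # BRDF is the constant albedo / pi regardless of the two directions.
            def f(incoming_dir, outgoing_dir):
                return [a / math.pi for a in albedo]
            return f

        def checkerboard_shader(u, v):
            # Shading: how the scattering function varies across the surface. This shader
            # picks one of two albedos based on the surface coordinates (u, v).
            white, red = (0.8, 0.8, 0.8), (0.8, 0.1, 0.1)
            albedo = white if (int(u * 8) + int(v * 8)) % 2 == 0 else red
            return lambertian_brdf(albedo)

        # Evaluate the material at one surface point for an arbitrary pair of directions.
        bsdf = checkerboard_shader(0.3, 0.7)
        print(bsdf((0.0, 0.0, 1.0), (0.0, 0.7, 0.7)))  # constant RGB value of the Lambertian lobe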

    Other subfields
  • Physically based rendering – concerned with generating images according to the laws of geometric optics
  • Real-time rendering – focuses on rendering for interactive applications, typically using specialized hardware like GPUs
  • Non-photorealistic rendering
  • Relighting – a recent area concerned with quickly re-rendering scenes