
Artificial Intelligence - Algorithmic Composition And Generative Music.


Algorithmic composition is a composer's approach to producing new musical material by following a preset, limited set of rules or procedures.

In place of normal musical notation, the algorithm might instead be a set of instructions defined by the composer for the performer to follow throughout a performance. 

According to one school of thinking, algorithmic composition should include as little human intervention as possible.

In music, AI systems based on generative grammar, knowledge-based systems, genetic algorithms, and, more recently, deep learning-trained artificial neural networks have all been used.

The employment of algorithms to assist in the development of music is far from novel.

Several thousand-year-old music theory treatises provide early examples.

These treatises compiled lists of common-practice rules and conventions that composers followed in order to write music correctly.

Johann Joseph Fux's Gradus ad Parnassum (1725), which describes the precise rules defining species counterpoint, is an early example of algorithmic composition.

Intended as an instructional tool, species counterpoint presented five techniques for composing complementary lines of musical harmony against a primary or fixed melody.

Followed to the letter, Fux's technique allows little deviation from its specified rules.

Chance was often used in early instances of algorithmic composition with little human intervention.

Chance music, often known as aleatoric music, dates back to the Renaissance.

Mozart is credited with the most renowned early example of the technique.

A published manuscript attributed to Mozart and dated 1787 includes the use of a "Musikalisches Würfelspiel" (musical dice game).

To put together a 16-bar waltz, the performer rolls dice to choose one-bar sections of precomposed music (out of a possible 176) at random.
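The mechanic can be sketched in a few lines of code. This is only an illustration, not Mozart's actual lookup table: the fragment numbers below are invented placeholders, while the real game maps each bar and dice roll to specific precomposed measures.

```python
import random

# Sketch of the musical dice game. The real game uses a published table
# mapping (bar, dice total) to one of 176 precomposed one-bar fragments;
# the table below is an invented placeholder with the same shape.
random.seed(42)

NUM_BARS = 16
DICE_TOTALS = range(2, 13)  # two dice give 11 possible totals per bar

# Hypothetical lookup table: bar index -> {dice total -> fragment id}
table = {bar: {total: bar * 11 + (total - 2) for total in DICE_TOTALS}
         for bar in range(NUM_BARS)}

def roll_waltz():
    """Roll two dice for each of the 16 bars and look up a fragment."""
    waltz = []
    for bar in range(NUM_BARS):
        total = random.randint(1, 6) + random.randint(1, 6)
        waltz.append(table[bar][total])
    return waltz

waltz = roll_waltz()
print(waltz)  # 16 fragment ids, one per bar
```

Note that 16 bars times 11 possible totals yields exactly the 176 fragments mentioned above.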

John Cage, an American composer, took these early aleatoric approaches to a new level by composing a work in which the bulk of the composition was determined by chance.

In the musical dice game, chance is only allowed to affect the sequence of brief pre-composed musical snippets, but in his 1951 work Music of Changes, chance is allowed to govern almost all choices.

To make every musical decision, Cage consulted the ancient Chinese divination text the I Ching (The Book of Changes).

For playability, the work's performer, Cage's friend David Tudor, had to convert the highly explicit and intricate score into something closer to conventional notation.

This demonstrates two types of aleatoric music: one in which the composer uses random processes to generate a fixed score, and another in which the sequence of musical sections is left to the performer or to chance.

Arnold Schoenberg created a twelve-tone algorithmic composition process that is closely related to fields of mathematics like combinatorics and group theory.

Twelve-tone composition is an early form of serialism in which each of the twelve tones of traditional western music is given equal weight.

After each of the twelve tones is placed in a chosen row with no repeated pitches, the row is systematically transposed and inverted to form a 12 × 12 matrix.

The matrix contains all variants of the original tone row that the composer may use for pitch material.
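One standard construction of the matrix can be sketched as follows. The example row is invented, and the code assumes the common convention that each matrix row is a transposition of the prime row beginning on the successive pitches of its inversion.

```python
# Build a 12 x 12 twelve-tone matrix from a tone row. The row here is an
# invented example; the construction follows the common convention that
# each matrix row is a transposition of the prime row starting on the
# successive pitches of the row's inversion.
row = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]  # pitch classes, no repeats

# Invert each interval around the first pitch.
inversion = [(2 * row[0] - p) % 12 for p in row]

# Transpose the prime row so it begins on each pitch of the inversion.
matrix = [[(p + start - row[0]) % 12 for p in row] for start in inversion]

for r in matrix:
    print(r)  # row 0 is the original; columns read down give inversions
```

Every matrix row contains all twelve pitch classes, so the composer may draw pitch material from any of its transformations.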

A fresh row may be employed once the aggregate—that is, all of the pitches from one row—has been included into the score.

The rows may be further divided into subsets to provide harmonic content (a vertical collection of pitches) rather than melodic lines (a horizontal setting).

Later composers like Pierre Boulez and Karlheinz Stockhausen experimented with serializing additional musical aspects by building matrices that included dynamics and timbre.

Some algorithmic composing approaches were created in response to serialist composers' rejection or modification of previous techniques.

According to Iannis Xenakis, serialist composers were excessively focused on harmony as a succession of interconnected linear objects (the establishment of linear tone rows), and their procedures grew too difficult for the listener to comprehend.

He presented new ways to adapt nonmusical algorithms for music creation that might work with dense sound masses.

The strategy, according to Xenakis, liberated music from its linear concerns.

He was motivated by scientific studies of natural and social events such as moving particles in a cloud or thousands of people assembled at a political rally, and he focused his compositions on the application of probability theory and stochastic processes.

Xenakis, for example, used Markov chains to manipulate musical elements like pitch, timbre, and dynamics to gradually build thick-textured sound masses over time.

The likelihood of the next happening event is largely influenced by previous occurrences in a Markov chain; hence, his use of algorithms mixed indeterminate aspects like those in Cage's chance music with deterministic elements like serialism.
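A minimal sketch of the idea (not Xenakis's actual models) uses a transition table in which the probability of the next pitch depends only on the current one; the pitches and probabilities below are invented.

```python
import random

# Toy first-order Markov chain over pitches: the next pitch depends only
# on the current one. The transition probabilities are invented.
random.seed(0)

transitions = {
    "C": [("E", 0.5), ("G", 0.3), ("C", 0.2)],
    "E": [("G", 0.6), ("C", 0.4)],
    "G": [("C", 0.7), ("E", 0.3)],
}

def markov_melody(start, length):
    """Walk the chain to produce a melody of the given length."""
    pitch, melody = start, [start]
    for _ in range(length - 1):
        choices, weights = zip(*transitions[pitch])
        pitch = random.choices(choices, weights=weights)[0]
        melody.append(pitch)
    return melody

print(markov_melody("C", 8))
```

The same table could govern timbre or dynamics instead of pitch, which is how such chains build evolving sound masses over time.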

He dubbed this music stochastic music.

It prompted a new generation of composers to incorporate more complicated algorithms into their work.

Calculations for these composers ultimately necessitated the use of computers.

Xenakis was a forerunner in the use of computers in music, using them to assist in the calculation of the outcomes of his stochastic and probabilistic procedures.

With his 1978 album Ambient 1: Music for Airports, Brian Eno popularized ambient music, building on composer Erik Satie's notion of background music for live performers (known as furniture music).

The piece used seven tape loops of different lengths, each holding a distinct pitch.

With each loop, the pitches were in a new sequence, creating a melody that was always shifting.

The composition always develops in the same manner each time it is performed since the inputs are the same.
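The loop mechanism can be simulated directly. The loop lengths and pitches below are invented stand-ins, but they show how unequal loop lengths make the combined pattern drift while remaining fully deterministic.

```python
# Simulate tape loops of unequal lengths, each sounding one pitch when it
# restarts. Lengths and pitches are invented; the point is that unequal
# loops drift out of phase yet the process is fully deterministic.
loop_lengths = [17, 19, 23]        # seconds per loop (hypothetical)
pitches = ["Ab", "C", "Eb"]

def onsets_at(t):
    """Pitches whose loops restart (and so sound) at time t."""
    return [p for p, n in zip(pitches, loop_lengths) if t % n == 0]

# The same inputs always yield the same evolving texture.
timeline = {t: onsets_at(t) for t in range(60) if onsets_at(t)}
print(timeline)
```

Because the loop lengths share no common factor, the full pattern takes a very long time to repeat, which is why the melody seems always to shift.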

Eno invented the phrase "generative music" in 1995 to describe systems that generate constantly changing music by adjusting parameters over time.

Ambient and generative music are both forerunners of autonomous computer-based algorithmic creation, most of which now uses artificial intelligence techniques.

Noam Chomsky and his collaborators invented generative grammar, which is a set of principles for describing natural languages.

The rules define a range of potential serial orderings of items by rewriting hierarchically structured elements.

Generative grammars, which have been adapted for algorithmic composition, may be used to generate musical sections.
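As a sketch, a toy grammar with invented rewrite rules can expand a start symbol into a chord sequence, in the Chomsky style of rewriting hierarchically structured elements:

```python
import random

# Toy generative grammar for harmony. Nonterminals (uppercase) are
# rewritten until only terminal chord labels remain. The rules are
# invented for illustration.
random.seed(3)

rules = {
    "PHRASE": [["OPENING", "CADENCE"]],
    "OPENING": [["I", "IV"], ["I", "vi", "IV"]],
    "CADENCE": [["V", "I"], ["ii", "V", "I"]],
}

def generate(symbol):
    """Recursively rewrite a symbol into a list of chord labels."""
    if symbol not in rules:            # terminal: a chord label
        return [symbol]
    out = []
    for s in random.choice(rules[symbol]):
        out.extend(generate(s))
    return out

print(generate("PHRASE"))  # a chord sequence that always ends on I
```

The hierarchy of the rules, not the surface order of the symbols, is what defines the range of possible outputs.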

Experiments in Musical Intelligence (1996) by David Cope is possibly the best-known use of generative grammar.

Cope taught his program to produce music in the styles of a variety of composers, including Bach, Mozart, and Chopin.

In knowledge-based systems, information about the genre of music the composer wishes to replicate is encoded as a database of facts, which may be used to develop an artificial expert that aids the composer.

Genetic algorithms are a compositional technique that mimics the process of biological evolution.

A population of randomly generated compositions is evaluated for similarity to the intended musical output.

Artificial selection methods, modeled on natural selection, are then applied to increase the likelihood that musically attractive qualities propagate in subsequent generations.

The composer interacts with the system, stimulating new ideas in both the computer and the spectator.
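The evolutionary loop described above can be sketched with an invented, deliberately simple fitness measure (distance from a target melody); real systems use far richer musical criteria.

```python
import random

# Minimal genetic-algorithm loop: evaluate a population of random
# melodies, keep the fittest, and mutate them to breed the next
# generation. The target-matching fitness function is invented.
random.seed(1)

TARGET = [0, 4, 7, 4, 0, 7, 4, 0]  # hypothetical "desired" melody

def fitness(melody):
    """Count positions that match the target."""
    return sum(a == b for a, b in zip(melody, TARGET))

def mutate(melody, rate=0.2):
    return [random.randrange(12) if random.random() < rate else p
            for p in melody]

population = [[random.randrange(12) for _ in TARGET] for _ in range(30)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # select the fittest
    population = parents + [mutate(random.choice(parents))
                            for _ in range(20)]  # breed the rest

best = max(population, key=fitness)
print(best, fitness(best))
```

In an interactive system, the fitness function would be replaced or supplemented by the composer's own judgments about each generation.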

Deep learning systems like generative adversarial networks, or GANs, are used in more contemporary AI-generated composition methodologies.

In music, generative adversarial networks pit a generator—which makes new music based on compositional style knowledge—against a discriminator, which tries to tell the difference between the generator's output and that of a human composer.

When the generator fails to fool the discriminator, it receives feedback and improves, until the discriminator can no longer distinguish between genuine and generated musical content.

Music is rapidly being driven in new and fascinating ways by the repurposing of non-musical algorithms for musical purposes.

Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

Computational Creativity.

Further Reading:


Cope, David. 1996. Experiments in Musical Intelligence. Madison, WI: A-R Editions.

Eigenfeldt, Arne. 2011. “Towards a Generative Electronica: A Progress Report.” eContact! 14, no. 4: n.p.

Eno, Brian. 1996. “Evolving Metaphors, in My Opinion, Is What Artists Do.” In Motion Magazine, June 8, 1996.

Nierhaus, Gerhard. 2009. Algorithmic Composition: Paradigms of Automated Music Generation. New York: Springer.

Parviainen, Tero. “How Generative Music Works: A Perspective.”

Artificial Intelligence - Generative Design.


Any iterative rule-based technique used to develop several choices that fulfill a stated set of objectives and constraints is referred to as generative design.

The end result of such a process may be anything from complicated architectural models to works of art, and it can be applied in a number of industries, including architecture, art, engineering, and product design.

A more conventional design technique involves evaluating a very small number of possibilities before selecting one to develop into a finished product.

The justification for utilizing a generative design framework is that the end aim of a project is not always known at the start.

As a result, the goal should not be to come up with a single proper solution to an issue, but rather to come up with a variety of feasible choices that all meet the requirements.

Using a computer's processing capacity, multiple variations of a solution may be quickly created and analyzed, much more quickly than a person could.

As the designer's or user's aims and overall vision become clearer, the input parameters are fine-tuned to refine the solution space.

This avoids the problem of being locked into a single solution too early in the design phase, allowing for creative exploration of a broad variety of possibilities.

The expectation is that by doing so, the odds of achieving a result that best meets the defined design requirements will increase.

It's worth noting that generative design doesn't have to be a digital process; an iterative approach might be created in a physical environment.

However, since a computer's processing capacity (i.e., the quantity and speed of calculations) greatly exceeds that of a person, generative design approaches are often equated with digital techniques.

The creative process is being aided by digital technologies, particularly artificial intelligence-based solutions.

Generative art and computational design in architecture are two examples of artificial intelligence applications.

The term "generative art," often known as "computer art," refers to artwork created in part with the help of a self-contained digital system.

Decisions that would normally be made by a human artist are delegated to an automated procedure in whole or in part.

Instead, by describing the inputs and rule sets to be followed, the artist generally maintains some influence over the process.

Georg Nees, Frieder Nake, and A. Michael Noll are usually acknowledged as the inventors of visual computer art.

The "3N" group of computer pioneers is sometimes referred to as a unit.

Georg Nees is widely credited with the founding of the first generative art exhibition, Computer Graphic, in Stuttgart in 1965.

In the same year, exhibitions by Nake (in cooperation with Nees) and Noll were held in Stuttgart and New York City, respectively (Boden and Edmonds 2009).

In their use of computers to generate works of art, these early examples of generative art in the visual media are groundbreaking.

They were also constrained by the existing research methodologies at the time.

In today's world, the availability of AI-based technology, along with exponential advances in processing power, has resulted in the emergence of new forms of generative art.

Computational creativity, described as "a discipline of artificial intelligence focused on designing agents that make creative goods autonomously," is an intriguing subset of these new efforts (Davis et al. 2016).

When it comes to generative art, the purpose of computational creativity is to use machine learning methods to tap into a computer's creative potential.

In this approach, the creativity process shifts away from giving a computer step-by-step instructions (as was the case in the early days) and toward more abstract procedures with unpredictable outputs.

The DeepDream computer vision program, created by Google engineer Alexander Mordvintsev in 2015, is a modern example of computational creativity.

A convolutional neural network is used in this project to purposefully over-process a picture.

This brings forward patterns that correspond to how a certain layer in the network interprets an input picture based on the image types it has been taught to recognize.

The end effect is psychedelic reinterpretations of the original picture, comparable to what one may see in a restless night's sleep.

Mordvintsev demonstrates how a neural network trained on a set of animals can take images of clouds and convert them into rough animal representations that match the detected features.

Using a different training set, the network would transform elements like horizon lines and towering vertical structures into squiggly representations of skyscrapers and buildings.

As a result, these new pictures might be regarded as unexpected, unique pieces of art made entirely by the computer's own neural-network-driven creative process.

Another contemporary example of computational creativity is My Artificial Muse.

Unlike DeepDream, which depends entirely on a neural network to create art, Artificial Muse investigates how an AI-based method might cooperate with a human to inspire new paintings (Barqué-Duran et al. 2018).

The neural network is trained using a massive collection of human postures culled from existing photos and rendered as stick figures.

The data is then used to generate an entirely new pose, which is fed back into the algorithm; the algorithm reconstructs what it believes a painting based on this pose should look like.

As a result, the new pose might be seen as a muse for the algorithm, inspiring it to produce an entirely unique picture, which is subsequently executed by the artist.

Two-dimensional computer-aided drafting (CAD) systems were the first to integrate computers into the field of architecture, and they were used to directly imitate the job of hand sketching.

Although using a computer to create drawings was still a manual process, it was seen to be an advance over the analogue method since it allowed for more accuracy and reproducibility.

These rudimentary CAD applications were soon superseded by more complicated parametric design software, which takes a more programmatic approach to the construction of an architectural model (i.e., geometry is created through user-specified variables).

Today, the most popular platform for this sort of work is Grasshopper (a plugin for the three-dimensional computer-aided design software Rhino), which was created by David Rutten in 2007 while working at Robert McNeel & Associates.

Take, for example, defining a rectangle, which is a pretty straightforward geometric problem.

The length and breadth values would be created as user-controlled parameters in a parametric modeling technique.

The program would automatically change the final design (i.e., the rectangle drawing) based on the parameter values provided.

Imagine this on a bigger scale, where a set of parameters connects a complicated collection of geometric representations (e.g., curves, surfaces, planes, etc.).

As a consequence, basic user-specified parameters may be used to determine the output of a complicated geometric design.
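In code, the idea reduces to functions of parameters; the function names below are illustrative, not Grasshopper's actual API. Changing one input regenerates every linked shape.

```python
# Parametric-modeling sketch: geometry is a function of user-controlled
# parameters, and dependent shapes update automatically when a parameter
# changes. Function names are illustrative, not Grasshopper's API.
def rectangle(length, width):
    """Corner points of a rectangle defined by two parameters."""
    return [(0, 0), (length, 0), (length, width), (0, width)]

def inset(length, width, margin):
    """A dependent shape: an inner rectangle linked to the same parameters."""
    return [(margin, margin), (length - margin, margin),
            (length - margin, width - margin), (margin, width - margin)]

# Adjusting a single parameter value regenerates both shapes.
print(rectangle(10, 4))
print(inset(10, 4, 1))
```

Scaled up, a handful of such parameters can drive an entire linked network of curves, surfaces, and planes.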

A further advantage is that parameters can interact in unexpected ways, producing results that a designer would not have imagined.

Despite the fact that parametric design uses a computer to produce and display complicated results, the process is still manual.

A set of parameters must be specified and controlled by a person.

The computer or program that performs the design computations is given more agency in generative design methodologies.

Neural networks may be trained on examples of designs that meet a project's general aims, and then used to create multiple design proposals using fresh input data.

A recent example of generative design in an architectural environment is the layout of the new Autodesk headquarters in Toronto's MaRS Innovation District (Autodesk 2016).

Existing employees were surveyed as part of this initiative, and data was collected on six quantifiable goals: work style preference, adjacency preference, level of distraction, interconnectivity, daylight, and views to the outside.

All of these requirements were taken into account by the generative design algorithm, which generated numerous office arrangements that met or exceeded the stated standards.

These findings were analyzed, and the highest-scoring ones were utilized to design the new workplace arrangement.
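The generate-and-evaluate loop behind such a project can be sketched as follows. The six goals are reduced here to invented random scores, whereas the real system simulated daylight, adjacency, and the other criteria.

```python
import random

# Toy generate-evaluate loop for generative design: produce many candidate
# layouts, score each against the six goals, keep the best for review.
# The per-goal scores are random stand-ins for real simulations.
random.seed(7)

GOALS = ["work_style", "adjacency", "low_distraction",
         "interconnectivity", "daylight", "views"]

def random_layout():
    """A candidate layout, abstracted to one score per goal in [0, 1]."""
    return {g: random.random() for g in GOALS}

def total_score(layout):
    return sum(layout.values())

candidates = [random_layout() for _ in range(500)]
best = sorted(candidates, key=total_score, reverse=True)[:5]
print([round(total_score(b), 2) for b in best])  # top 5 of 500
```

A real system would also weight the goals against each other, since a layout rarely maximizes all six at once.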

In this approach, a huge quantity of data was utilized to build a final optimal design, including prior projects and user-specified data.

The data linkages would have been too complicated for a person to comprehend, and could only be fully explored through a generative design technique.

Generative design techniques have proven beneficial in a broad variety of applications where a designer wants to explore a large solution space.

They avoid the issue of concentrating on a single solution too early in the design phase by allowing for creative exploration of a variety of possibilities.

As AI-based computational approaches develop, generative design will find new uses.

Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

Computational Creativity.

Further Reading:

Autodesk. 2016. “Autodesk @ MaRS.” Autodesk Research.

Barqué-Duran, Albert, Mario Klingemann, and Marc Marzenit. 2018. “My Artificial Muse.”

Boden, Margaret A., and Ernest A. Edmonds. 2009. “What Is Generative Art?” Digital Creativity 20, no. 1–2: 21–46.

Davis, Nicholas, Chih-Pin Hsiao, Kunwar Yashraj Singh, Lisa Li, and Brian Magerko. 2016. “Empirically Studying Participatory Sense-Making in Abstract Drawing with a Co-Creative Cognitive Agent.” In Proceedings of the 21st International Conference on Intelligent User Interfaces—IUI ’16, 196–207. Sonoma, CA: ACM Press.

Menges, Achim, and Sean Ahlquist, eds. 2011. Computational Design Thinking: Computation Design Thinking. Chichester, UK: J. Wiley & Sons.

Mordvintsev, Alexander, Christopher Olah, and Mike Tyka. 2015. “Inceptionism: Going Deeper into Neural Networks.” Google Research Blog.

Nagy, Danil, and Lorenzo Villaggi. 2017. “Generative Design for Architectural Space Planning.”

Picon, Antoine. 2010. Digital Culture in Architecture: An Introduction for the Design Professions. Basel, Switzerland: Birkhäuser Architecture.

Rutten, David. 2007. “Grasshopper: Algorithmic Modeling for Rhino.”

Artificial Intelligence - What Is Computational Creativity?


Computer-generated art is connected to computational creativity, although computational creativity is not reducible to it.

According to Margaret Boden, "CG-art" is an artwork that "results from some computer program being allowed to operate on its own, with zero input from the human artist" (Boden 2010, 141).

This definition is both severe and limiting, since it is confined to the creation of "art works" as defined by human observers.

Computational creativity, on the other hand, is a broader phrase that encompasses a broader range of actions, equipment, and outputs.

"Computational creativity is an area of Artificial Intelligence (AI) study... where we construct and engage with computational systems that produce artefacts and ideas," said Simon Colton and Geraint A. Wiggins.

Those "artefacts and ideas" might be works of art, as well as other things, discoveries, and/or performances (Colton and Wiggins 2012, 21).

Games, narrative, music composition and performance, and visual arts are examples of computational creativity applications and implementations.

Games and other cognitive skill competitions are often used to evaluate and assess machine skills.

In fact, the fundamental criterion of machine intelligence was established via a game, which Alan Turing dubbed the "imitation game" (1950).

Since then, AI progress and accomplishment have been monitored and evaluated via games and other human-machine contests.

Chess has had a special status and privileged position among all the games in which computers have been involved, to the point where critics such as Douglas Hofstadter (1979, 674) and Hubert Dreyfus (1992) confidently asserted that championship-level AI chess would forever remain out of reach and unattainable.

IBM's Deep Blue changed the game when it beat Garry Kasparov in 1997.

But chess was just the start.

In 2016, AlphaGo, a Go-playing algorithm built by Google DeepMind, defeated Lee Sedol, one of the most famous human players of this notoriously difficult board game, in four out of five games.

Human observers, such as Fan Hui (2016), have praised AlphaGo's nimble play as "beautiful," "intuitive," and "innovative."

Natural Language Generation (NLG) systems such as Automated Insights' Wordsmith and Narrative Science's Quill are used to create human-readable stories from machine-readable data.

Unlike basic news aggregators or template-based NLG systems, these programs "write" (or "produce," as the case may be) unique stories that are, in many cases, almost indistinguishable from human-created material.

Christer Clerwall, for example, performed a small-scale study in 2014 in which human test subjects were asked to assess news pieces written by Wordsmith and by a professional writer for the Los Angeles Times.

The study's findings reveal that, although software-generated information is often seen as descriptive and dull, it is also regarded as more impartial and trustworthy (Clerwall 2014, 519).

"Within 10 years, a digital computer would produce music regarded by critics as holding great artistic merit," Herbert Simon and Allen Newell predicted in their famous article "Heuristic Problem Solving" (Simon and Newell 1958, 7).

This prediction has come true.

Experiments in Musical Intelligence (EMI, or "Emmy") by David Cope is one of the most well-known works in the subject of "algorithmic composition." 

Emmy is a computer-based algorithmic composer capable of analyzing existing musical compositions, rearranging their fundamental components, and then creating new, unique scores that sound like and, in some circumstances, are indistinguishable from Mozart, Bach, and Chopin's iconic masterpieces (Cope 2001).

There are robotic systems in music performance, such as Shimon, a marimba-playing jazz-bot from Georgia Tech University, that can not only improvise with human musicians in real time, but also "is designed to create meaningful and inspiring musical interactions with humans, leading to novel musical experiences and outcomes" (Hoffman and Weinberg 2011).

Cope's method, which he refers to as "recombinacy," is not restricted to music.

It may be used and applied to any creative technique in which new works are created by reorganizing or recombining a set of finite parts, such as the alphabet's twenty-six letters, the musical scale's twelve tones, the human eye's sixteen million colors, and so on.

As a result, similar computational creativity methods have been adopted in other creative undertakings, such as painting.

The Painting Fool is an automated painter created by Simon Colton that seeks to be "considered seriously as a creative artist in its own right" (Colton 2012, 16).

To date, the program has generated thousands of "original" artworks, which have been shown in both online and physical art exhibitions.

Obvious, a Paris-based collective comprising the artists Hugo Caselles-Dupré, Pierre Fautrel, and Gauthier Vernier, uses a generative adversarial network (GAN) to create portraits of a fictitious family (the Belamys) in the manner of the European masters.

Christie's auctioned one of these portraits, "Portrait of Edmond Belamy," for $432,500 in October 2018.

Designing ostensibly creative systems instantly runs into semantic and conceptual issues.

Creativity is an enigmatic phenomenon that is difficult to pinpoint or quantify.

Are these programs, algorithms, and systems really "creative," or are they merely a sort of "imitation," as some detractors have labeled them? This issue recalls John Searle's (1984, 32–38) Chinese Room thought experiment, which aimed to highlight the distinction between genuine cognitive activity, such as creative expression, and mere simulation or imitation.

Researchers in the field of computational creativity have introduced and operationalized a rather specific formulation to characterize their efforts: "The philosophy, science, and engineering of computational systems that, by taking on specific responsibilities, exhibit behaviors that unbiased observers would deem creative" (Colton and Wiggins 2012, 21).

The key word in this description is "responsibility." 

"The term responsibilities highlights the difference between the systems we build and creativity support tools studied in the HCI [human-computer interaction] community and embedded in tools like Adobe's Photoshop, to which most observers would probably not attribute creative intent or behavior," Colton and Wiggins explain (Colton and Wiggins 2012, 21).

"The program is only a tool to improve human creativity" (Colton 2012, 3–4) using a software application like Photoshop; it is an instrument utilized by a human artist who is and remains responsible for the creative choices and output created by the instrument.

Computational creativity research, on the other hand, "seeks to develop software that is creative in and of itself" (Colton 2012, 4).

On the one hand, one might react as we have in the past, dismissing contemporary technological advancements as simply another instrument or tool of human action—or what technology philosophers such as Martin Heidegger (1977) and Andrew Feenberg (1991) refer to as "the instrumental theory of technology." 

This is, in fact, the explanation supplied by David Cope in his own appraisal of his work's influence and relevance.

Emmy and other algorithmic composition systems, according to Cope, do not compete with or threaten to replace human composition.

They are just instruments used in and for musical creation.

"Computers represent just instruments with which we stretch our ideas and bodies," writes Cope.

Computers, programs, and the data utilized to generate their output were all developed by humanity.

Our algorithms make music that is just as much ours as music made by our greatest human inspirations" (Cope 2001, 139).

According to Cope, no matter how much algorithmic mediation is invented and used, the musical composition generated by these advanced digital tools is ultimately the responsibility of the human person.

The similar argument may be made for other supposedly creative programs, such as AlphaGo, a Go-playing algorithm, or The Painting Fool, a painting software.

According to this argument, when AlphaGo wins a big tournament or The Painting Fool creates a spectacular piece of visual art that is presented in a gallery, there is still a human person (or persons) who is responsible for, and can answer for, what has been created.

The attribution lines may get more intricate and drawn out, but there is always someone in a position of power behind the scenes, it might be claimed.

In circumstances where efforts have been made to transfer responsibility to the computer, evidence of this already exists.

Consider AlphaGo's game-winning move 37 versus Lee Sedol in game two.

If someone wants to learn more about the move and its significance, AlphaGo is the one to ask.

The algorithm, on the other hand, will remain silent.

In actuality, it was up to the human programmers and spectators to answer on AlphaGo's behalf and explain the importance and effect of the move.

As a result, as Colton (2012) and Colton et al. (2015) point out, if the mission of computational creativity is to succeed, the software will have to do more than create objects and behaviors that humans interpret as creative output.

It must also take ownership of the task by accounting for what it accomplished and how it did it.

"The software," Colton and Wiggins argue, "should be available for questioning about its motivations, processes, and products" (Colton and Wiggins 2012, 25), eventually becoming capable not only of generating titles for, explanations of, and narratives about its work but also of responding to questions by engaging in critical dialogue with its audience (Colton et al. 2015, 15).

At the same time, these algorithmic incursions into what had previously been a protected and solely human realm have created possibilities.

It's not only a question of whether computers, machine learning algorithms, or other applications can or cannot be held accountable for what they do or don't do; it's also a question of how we define, explain, and define creative responsibility in the first place.

This suggests that there are strong and weak components to this endeavor, which Mohammad Majid al-Rifaie and Mark Bishop refer to as strong and weak forms of computational creativity, echoing Searle's original distinction between strong and weak AI (Majid al-Rifaie and Bishop 2015, 37).

The types of application development and demonstrations presented by people and companies such as DeepMind, David Cope, and Simon Colton are examples of the "strong" sort.

However, these efforts have a "weak AI" component in that they simulate, operationalize, and stress test various conceptualizations of artistic responsibility and creative expression, resulting in critical and potentially insightful reevaluations of how we have defined these concepts in our own thinking.

Nothing has made Douglas Hofstadter reexamine his own thinking about thinking more than the endeavor to cope with and make sense of David Cope's Emmy (Hofstadter 2001, 38).

To put it another way, developing and experimenting with new algorithmic capabilities does not necessarily detract from human beings and what (hopefully) makes us unique, but it does provide new opportunities to be more precise and scientific about these distinguishing characteristics and their limits.

~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: 

AARON; Automatic Film Editing; Deep Blue; Emily Howell; Generative Design; Generative Music and Algorithmic Composition.

Further Reading

Boden, Margaret. 2010. Creativity and Art: Three Roads to Surprise. Oxford, UK: Oxford University Press.

Clerwall, Christer. 2014. “Enter the Robot Journalist: Users’ Perceptions of Automated Content.” Journalism Practice 8, no. 5: 519–31.

Colton, Simon. 2012. “The Painting Fool: Stories from Building an Automated Painter.” In Computers and Creativity, edited by Jon McCormack and Mark d’Inverno, 3–38. Berlin: Springer Verlag.

Colton, Simon, Alison Pease, Joseph Corneli, Michael Cook, Rose Hepworth, and Dan Ventura. 2015. “Stakeholder Groups in Computational Creativity Research and Practice.” In Computational Creativity Research: Towards Creative Machines, edited by Tarek R. Besold, Marco Schorlemmer, and Alan Smaill, 3–36. Amsterdam: Atlantis Press.

Colton, Simon, and Geraint A. Wiggins. 2012. “Computational Creativity: The Final Frontier.” In Frontiers in Artificial Intelligence and Applications, vol. 242, edited by Luc De Raedt et al., 21–26. Amsterdam: IOS Press.

Cope, David. 2001. Virtual Music: Computer Synthesis of Musical Style. Cambridge, MA: MIT Press.

Dreyfus, Hubert L. 1992. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press.

Feenberg, Andrew. 1991. Critical Theory of Technology. Oxford, UK: Oxford University Press.

Heidegger, Martin. 1977. The Question Concerning Technology, and Other Essays. Translated by William Lovitt. New York: Harper & Row.

Hoffman, Guy, and Gil Weinberg. 2011. “Interactive Improvisation with a Robotic Marimba Player.” Autonomous Robots 31, no. 2–3: 133–53.

Hofstadter, Douglas R. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books.

Hofstadter, Douglas R. 2001. “Staring Emmy Straight in the Eye—And Doing My Best Not to Flinch.” In Virtual Music: Computer Synthesis of Musical Style, edited by David Cope, 33–82. Cambridge, MA: MIT Press.

Hui, Fan. 2016. “AlphaGo Games—English.” DeepMind.

Majid al-Rifaie, Mohammad, and Mark Bishop. 2015. “Weak and Strong Computational Creativity.” In Computational Creativity Research: Towards Creative Machines, edited by Tarek R. Besold, Marco Schorlemmer, and Alan Smaill, 37–50. Amsterdam: Atlantis Press.

Searle, John. 1984. Minds, Brains and Science. Cambridge, MA: Harvard University Press.

Artificial Intelligence - What Is The AARON Computer Program?


Harold Cohen built AARON, a computer program that produces paintings.

The initial version was created "about 1972," according to Cohen.

Because AARON is not open source, its development came to a halt when Cohen died in 2016.

AARON was still creating fresh images in 2014, and it remained operational in 2016.

AARON is not an abbreviation.

The name was chosen because it begins with the first letter of the alphabet; Cohen anticipated eventually building further programs, which he never did.

AARON went through various versions over its four decades of development, each with its own set of capabilities.

Earlier versions could only generate black-and-white line drawings, while later versions could also paint in color.

Some AARON versions were set up to make abstract paintings, while others were set up to create scenes with objects and people.

AARON's main goal was to generate not just computer pictures, but also physical, large-scale images or paintings.

In Cohen's show at the San Francisco Museum of Modern Art, the lines generated by AARON (then a program written in C) were traced directly on the wall.

In later iterations, the software was paired with a machine equipped with a robotic arm that could apply paint to canvas.

For example, the version of AARON on display at Boston's Computer Museum in 1995, written in LISP and running on a Silicon Graphics computer, generated a file containing a set of instructions.

The file was then transmitted to a PC running a C++ program.

This computer was equipped with a robotic arm.

The C++ program processed the commands and controlled the arm's movement, as well as the mixing of dyes and their application to the canvas.
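The division of labor described above (one program emitting a file of drawing instructions, a second program parsing it to drive the hardware) can be sketched as follows. This is an illustrative sketch only: the command names and file format here are hypothetical and do not reflect AARON's actual instruction protocol, which was never published.

```python
# Hypothetical sketch of an instruction-file interpreter like the one that
# drove AARON's painting arm. Command names (MOVE, DIP, STROKE) and the
# whitespace-separated format are invented for illustration.

def parse_instructions(text):
    """Parse drawing commands into a list of actions the arm controller
    could execute in order."""
    actions = []
    for line in text.strip().splitlines():
        parts = line.split()
        cmd, args = parts[0], parts[1:]
        if cmd == "MOVE":      # reposition the brush at (x, y) without painting
            actions.append(("move", float(args[0]), float(args[1])))
        elif cmd == "DIP":     # load the brush with a named dye mix
            actions.append(("dip", args[0]))
        elif cmd == "STROKE":  # paint a stroke ending at (x, y)
            actions.append(("stroke", float(args[0]), float(args[1])))
        else:
            raise ValueError(f"unknown command: {cmd}")
    return actions

# A small instruction file as the LISP side might have emitted it:
instructions = """
MOVE 10 20
DIP ochre
STROKE 40 25
"""
print(parse_instructions(instructions))
```

The key point of the architecture is the separation of concerns: the generative program decides *what* to draw, while a separate interpreter handles the physical mechanics of *how* to draw it.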

Cohen's drawing and painting devices were also significant advancements.

Industrial inkjet printers were employed in subsequent generations as well.

Because of the colors these new printers could produce, Cohen considered this configuration of AARON the most advanced; where color was concerned, he regarded the inkjet as the most important innovation since the industrial revolution.

While Cohen concentrated primarily on physical images, Ray Kurzweil built a screensaver version of AARON around the year 2000.

By 2016, Cohen had developed his own version of AARON, which produced black-and-white pictures that the user could color using a big touch screen.

"Fingerpainting," he called it.

AARON, according to Cohen, is neither a "totally independent artist" nor completely creative.

He did feel, however, that AARON demonstrates one requirement of autonomy: emergence, which Cohen defined as "paintings that are really shocking and unique."

Cohen never delved deeply into AARON's philosophical ramifications.

Judging by the amount of time he devoted to the subject in practically all of his interviews, it is easy to infer that Cohen regarded AARON's work as a colorist as his greatest accomplishment.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.

See also: Computational Creativity; Generative Design.

Further Reading

Cohen, Harold. 1995. “The Further Exploits of AARON, Painter.” Stanford Humanities Review 4, no. 2 (July): 141–58.

Cohen, Harold. 2004. “A Sorcerer’s Apprentice: Art in an Unknown Future.” Invited talk at Tate Modern, London.

Cohen, Paul. 2016. “Harold Cohen and AARON.” AI Magazine 37, no. 4 (Winter): 63–66.

McCorduck, Pamela. 1990. Aaron’s Code: Meta-Art, Artificial Intelligence, and the Work of Harold Cohen. New York: W. H. Freeman.

What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) is defined as the software representation of generalized human cognitive capacities that enables the ...