
Artificial Intelligence - Who Is Mark Tilden?

 


Mark Tilden (1961–) is a Canadian freelance designer of biomorphic robots.

A number of his robots are sold as toys.

Others have appeared in television and cinema as props.

Tilden is well-known for his opposition to the notion that strong artificial intelligence is required for complicated robots.

Tilden is a forerunner in the field of BEAM robotics (biology, electronics, aesthetics, and mechanics).

To replicate biological neurons, BEAM robots use analog circuits and continuously varying signals rather than digital electronics and microprocessors.

Biomorphic robots are programmed to change their gaits in order to save energy.

When such robots encounter obstacles or changes in the underlying terrain, they are knocked out of their lowest-energy state, forcing them to adapt by settling into a new walking pattern.

The mechanics of the underlying machine rely heavily on self-adaptation.

After failing to develop a traditional electronic robot butler in the late 1980s, Tilden resorted to BEAM type robots.

Programmed with Isaac Asimov's Three Laws of Robotics, the robot could barely vacuum floors.



After hearing MIT roboticist Rodney Brooks speak at the University of Waterloo on the advantages of basic sensorimotor, stimulus-response robotics over computationally complex mobile devices, Tilden completely abandoned the project.

Tilden left Brooks's lecture wondering whether dependable robots might be built without computer processors or artificial intelligence.

Rather than having intelligence written into the robot's programming, Tilden hypothesized that intelligence might arise from the robot's operating environment and from the emergent properties produced by interaction with that world.

Tilden studied and developed a variety of unusual analog robots at the Los Alamos National Laboratory in New Mexico, employing fast prototyping and off-the-shelf and cannibalized components.



Los Alamos was looking for robots that could operate in unstructured, unpredictable, and possibly hazardous conditions.

Tilden built almost a hundred robot prototypes.

His SATBOT prototype, an autonomous spacecraft, could align itself with the Earth's magnetic field on its own.

He built fifty insectoid robots capable of creeping through minefields and identifying explosive devices for the Marine Corps Base Quantico.

A robot known as an "aggressive ashtray" spat water at smokers.

A "solar spinner" was used to clean the windows.

The actions of an ant were reproduced by a biomorph made from five broken Sony Walkmans.

Tilden started building Living Machines powered by solar cells at Los Alamos.

These machines ran at extremely sluggish rates due to their energy source, but they were dependable and efficient for lengthy periods of time, often more than a year.

Tilden's first robot designs were based on thermodynamic conduit engines, namely tiny and efficient solar engines that could fire single neurons.

His "nervous net" neurons controlled the rhythms and patterns of motion in the robots' bodies rather than the workings of their brains.

Tilden's idea was to maximize the number of possible movement patterns while using the fewest embedded transistors feasible.

He found that with just twelve transistors he could create six different movement patterns.

By folding the six patterns into a figure eight in a symmetrical robot chassis, Tilden could replicate hopping, leaping, running, sitting, crawling, and a variety of other behaviors.
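The idea can be illustrated in simulation. The sketch below is a deliberately simplified, hypothetical model of a nervous-net ring, not Tilden's transistor circuit: a single pulse circulates through a few delay elements, and each element drives one motor phase, so a handful of parts yields a repeating gait pattern. All constants and names are invented for illustration.

```python
# Minimal, illustrative simulation of a BEAM-style "nervous net": a single
# pulse circulates around a ring of delay elements, and each element drives
# one motor phase. Conceptual sketch only, not Tilden's transistor circuit.

NUM_NEURONS = 4     # one delay "neuron" per motor phase (illustrative)
DELAY_TICKS = 3     # how many time steps each neuron holds the pulse

def nervous_net(steps):
    """Yield, per time step, the index of the currently active motor phase."""
    active, hold = 0, DELAY_TICKS
    for _ in range(steps):
        yield active
        hold -= 1
        if hold == 0:                      # pulse fires onward around the ring
            active = (active + 1) % NUM_NEURONS
            hold = DELAY_TICKS

if __name__ == "__main__":
    # Print a simple gait diagram: columns are time steps, rows are motor
    # phases ("legs"); '#' marks the phase driven at that step.
    timeline = list(nervous_net(24))
    for phase in range(NUM_NEURONS):
        print(f"phase {phase}: " + "".join("#" if t == phase else "." for t in timeline))
```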

Since then, Tilden has been a proponent of a new set of robot principles for such survivalist wild automata.

Tilden's Laws of Robotics say that (1) a robot must safeguard its survival at all costs; (2) a robot must get and keep access to its own power source; and (3) a robot must always seek out better power sources.
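Read as a control policy, these three laws amount to a fixed priority ordering. The following is a minimal, hypothetical sketch of that ordering; the sensor and actuator methods are invented placeholders, not part of any real robot API.

```python
# Illustrative priority loop for Tilden's three robot laws. The sensor and
# actuator methods below (threat_detected, battery_level, ...) are
# hypothetical placeholders, not part of any real robot API.

def control_step(robot):
    if robot.threat_detected():           # Law 1: protect its own existence
        robot.retreat()
    elif robot.battery_level() < 0.2:     # Law 2: keep access to a power source
        robot.seek_known_charger()
    else:                                 # Law 3: keep looking for better power
        robot.explore_for_better_power()
```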

Tilden thinks that wild robots will be used to rehabilitate ecosystems that have been harmed by humans.

Tilden had another breakthrough when he introduced very inexpensive robots as toys for the general public and robot aficionados.

He wanted his robots to be in the hands of as many people as possible, so that hackers, hobbyists, and members of different maker communities could reprogram and modify them.

Tilden designed the toys in such a way that they could be dismantled and analyzed.

They could also be hacked in basic ways.

Everything is color-coded and labeled, and all of the wires have gold-plated contacts that can be ripped apart.

Tilden is presently working with WowWee Toys in Hong Kong on consumer-oriented entertainment robots:

  • B.I.O. Bugs, Constructobots, G.I. Joe Hoverstrike, Robosapien, Roboraptor, Robopet, Roboreptile, Roboquad, Roboboa, Femisapien, and Joebot are all popular WowWee robot toys.
  • The Roboquad was designed for the Jet Propulsion Laboratory's (JPL) Mars exploration program.
  • Tilden is also the developer of the Roomscooper cleaning robot.


WowWee Toys sold almost three million of Tilden's robot designs by 2005.


Tilden made his first robotic doll when he was three years old.

At the age of six, he built a Meccano suit of armor for his cat.

At the University of Waterloo, he majored in Systems Engineering and Mathematics.


Tilden is presently working on OpenCog and OpenCog Prime alongside artificial intelligence pioneer Ben Goertzel.


OpenCog is a worldwide initiative supported by the Hong Kong government that aims to develop an open-source emergent artificial general intelligence framework as well as a common architecture for embodied robotic and virtual cognition.

Dozens of IT businesses across the globe are already using OpenCog components.

Tilden has worked on a variety of films and television series as a technical adviser or robot designer, including Lara Croft: Tomb Raider (2001), The 40-Year-Old Virgin (2005), Paul Blart: Mall Cop (2009), and X-Men: The Last Stand (2006).

In The Big Bang Theory (2007–2019), his robots are often displayed on the bookshelves of Sheldon's apartment.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 

Brooks, Rodney; Embodiment, AI and.


References And Further Reading

Frigo, Janette R., and Mark W. Tilden. 1995. “SATBOT I: Prototype of a Biomorphic Autonomous Spacecraft.” Mobile Robotics, 66–75.

Hapgood, Fred. 1994. “Chaotic Robots.” Wired, September 1, 1994. https://www.wired.com/1994/09/tilden/.

Hasslacher, Brosl, and Mark W. Tilden. 1995. “Living Machines.” Robotics and Autonomous Systems 15, no. 1–2: 143–69.

Marsh, Thomas. 2010. “The Evolution of a Roboticist: Mark Tilden.” Robot Magazine, December 7, 2010. http://www.botmag.com/the-evolution-of-a-roboticist-mark-tilden.

Menzel, Peter, and Faith D’Aluisio. 2000. “Biobots.” Discover Magazine, September 1, 2000. https://www.discovermagazine.com/technology/biobots.

Rietman, Edward A., Mark W. Tilden, and Manor Askenazi. 2003. “Analog Computation with Rings of Quasiperiodic Oscillators: The Microdynamics of Cognition in Living Machines.” Robotics and Autonomous Systems 45, no. 3–4: 249–63.

Samans, James. 2005. The Robosapiens Companion: Tips, Tricks, and Hacks. New York: Apress.



ISRO Shukrayaan-1 Venus Mission



    The Indian Space Research Organization has a long history of awing the rest of the world by completing space missions at remarkably low cost.


    In keeping with this tradition, the ISRO has set its sights on a Venus mission that would cost between Rs 500 and Rs 1,000 crore.


    "The price will be determined by the level of instrumentation. ISRO chairman S Somanath said, "If you install a lot of payload sensors, the cost would automatically go up."


    While foreign space organizations such as NASA spend vast sums of money on space missions, the ISRO prefers to focus on low-cost projects. 

    ISRO's Chandrayaan-1 was a low-cost spacecraft developed for about Rs 386 crore. 


    The Chandrayaan-2 mission cost Rs 603 crore to develop and Rs 367 crore to launch (1 million USD ≈ 7.8 crore INR in 2022).
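    As a rough check of these figures, the conversion quoted above (1 million USD ≈ 7.8 crore INR in 2022) can be applied directly; the snippet below is only a back-of-the-envelope calculation based on the numbers in this article.

```python
# Back-of-the-envelope conversion of the quoted costs, using the article's
# approximate 2022 rate of 7.8 crore INR per 1 million USD.

CRORE_PER_MILLION_USD = 7.8

def crore_to_million_usd(crore):
    return crore / CRORE_PER_MILLION_USD

print(crore_to_million_usd(386))          # Chandrayaan-1: ~49 million USD
print(crore_to_million_usd(603 + 367))    # Chandrayaan-2 (development + launch): ~124 million USD
print(crore_to_million_usd(500), crore_to_million_usd(1000))   # Venus mission range: ~64 to ~128 million USD
```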


    The ISRO chairman said the agency is in the process of approaching the Union government for authorization for the mission, speaking to the media on the sidelines of a national conference on Aerospace Quality and Reliability.




    In response to concerns, he said that the timetable for Chandrayaan-3 is still being worked out. 


    Following its Moon and Mars expeditions, the ISRO is considering a Venus trip. 

    Despite speculations that the ISRO is aiming a December 2024 launch window for the Venus mission, Somanath stated the timeline has yet to be finalized. 

    It would only be disclosed when the Union government had given its final approval. 


    The ISRO has worked hard to guarantee that it would be a one-of-a-kind mission. 


    "We have to be cautious with such pricey missions," he warned.

    "We don't want to conduct a Venus expedition just for the fun of it. 

    We're doing it because of the distinct identity that this mission will establish among all future Venus expeditions. 

    "That's the aim," Somanath said, adding that the mission would create a lot of data that scientists could use. 


    Despite the fact that the timetable has yet to be disclosed, the ISRO is well prepared. 

    "The technology definition, task package, scheduling, and procurement are all complete. But then it needs to go to the government, which will review it and ultimately approve it," he said. 

    According to him, Chandrayaan-3 is now undergoing testing for navigation, instrumentation, and ground simulations. 

    However, no timetable has been established.





    India is preparing to enter the race to get to Venus alongside the US and many other nations after successfully completing Moon and Mars missions. 


    The mission's goal will be to investigate Venus's poisonous and corrosive atmosphere, which is characterized by clouds of sulfuric acid that blanket the planet.

    S Somanath, the head of ISRO, said the project has been in the works for years and that the space agency is now "ready to launch an orbiter to Venus."

    "The project report is complete, the general plans are complete, and the funds have been identified. Building and launching a mission to Venus in a very short period of time is doable for India since the capacity exists now," the ISRO chairman stated during a daylong seminar on Venusian research.




    Shukrayaan-1 is a planned Indian Space Research Organization (ISRO) Venus orbiter designed to examine the planet's surface and atmosphere.


    In 2017, funds were allocated to complete preliminary studies, and instrument tenders were announced.

    The orbiter's scientific payload capacity, depending on its final design, would be about 100 kilograms (220 lb) with 500 W of power.

    The elliptical orbit around Venus is projected to have a periapsis of 500 kilometers (310 miles) and an apoapsis of 60,000 kilometers (37,000 miles). 
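    For a sense of scale, Kepler's third law gives the period of such a 500 km × 60,000 km orbit. The sketch below assumes standard textbook values for Venus's radius and gravitational parameter; the result of roughly 21 hours is an estimate for illustration, not an ISRO figure.

```python
# Rough orbital period of the quoted 500 km x 60,000 km Venus orbit.
# Venus constants below are standard textbook values (assumed, not from ISRO).
import math

MU_VENUS = 3.2486e5        # km^3/s^2, gravitational parameter of Venus
R_VENUS = 6051.8           # km, mean radius of Venus

r_peri = R_VENUS + 500.0       # periapsis radius, km
r_apo = R_VENUS + 60_000.0     # apoapsis radius, km
a = (r_peri + r_apo) / 2.0     # semi-major axis, km

period_s = 2.0 * math.pi * math.sqrt(a**3 / MU_VENUS)
print(f"Semi-major axis: {a:.0f} km")
print(f"Orbital period: {period_s / 3600.0:.1f} hours")   # roughly 21 hours
```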



    Science Payload





    The scientific payload will weigh about 100 kg (220 lb) and will include instruments from India and other nations. 

    Indian payloads and 7 foreign payloads have been shortlisted as of December 2019. 



    Instruments from India


    • Venus SAR (L & S-Band)
    • VARTISS (HF Radar)
    • VSEAM (Surface Emissivity)
    • VTC (Thermal Camera)
    • VCMC (Cloud Monitoring Camera)
    • LIVE (Lightning Sensor)
    • VASP (Spectro Polarimeter)
    • SPAV (Solar Occultation Photometry)
    • NAVA (Airglow Imager)
    • RAVI (RO Experiment)*
    • ETA (Electron Temperature Analyzer)
    • RPA (Retarding Potential Analyzer)
    • Mass Spectrometer
    • VISWAS (Plasma Analyzer)*
    • VREM (Radiation Environment)
    • SSXS (Solar Soft X-ray Spectrometer)
    • VIPER (Plasma Wave Detector)
    • VODEX (Dust Experiment)

    * Collaboration with Germany and Sweden is envisaged for RAVI and VISWAS. 




    International Payloads



      • Space Research Institute, Moscow, and LATMOS, France developed VIRAL (Venus Infrared Atmospheric Gas Linker).
      • IVOLGA is a laser heterodyne NIR spectrometer used to investigate the structure and dynamics of Venus's mesosphere.


    Overview Of The ISRO Shukrayaan Mission


    This mission has three broad research areas: first, surface/subsurface stratigraphy and resurfacing processes; second, atmospheric chemistry, dynamics, and compositional variations; and third, solar irradiance and the solar wind's interaction with the Venusian ionosphere, along with the structure, composition, and dynamics of the atmosphere.





    Shukrayaan Mission Inception, History And Status


    ISRO has been researching the possibility of future interplanetary missions to Mars and Venus, Earth's nearest planetary neighbors, based on the success of Chandrayaan and the Mangalyaan. 


    • The Venus mission was initially proposed in 2012 at a Tirupati space meet. 
    • The Indian government increased funding for the Department of Space by 23% in its 2017–18 budget. 
    • The budget specifies funds "for Mars Orbiter Mission II and Mission to Venus" under the space sciences department, and it was approved to perform preliminary investigations after the 2017–18 request for funding. 



    ISRO issued an 'Announcement of Opportunity' (AO) on April 19, 2017, requesting scientific payload proposals from Indian universities based on broad mission parameters.


    ISRO issued another 'Announcement of Opportunity' on November 6, 2018, soliciting payload applications from the worldwide scientific community. 

    The allowable scientific payload capacity was reduced from 175 kg in the first AO to 100 kg. 




    In 2018, India's ISRO and France's CNES had talks about collaborating on this mission and developing autonomous navigation and aerobraking technology together.


    • In addition, drawing on his experience from the Vega mission, French astronomer Jacques Blamont expressed interest to U R Rao in using inflatable balloons to examine the Venusian atmosphere. 
    • These instrumented balloons may be launched from an orbiter and gather long-term observations while floating in the planet's comparatively benign upper atmosphere, similar to the Vega missions. 
    • ISRO agreed to investigate a proposal to research the Venusian atmosphere at 55 kilometers (34 miles) altitude with a balloon probe carrying a 10 kilogram (22 pound) payload. 






    The Venus project is still in the configuration research phase as of late 2018, and ISRO has not yet received complete sanction from the Indian government.


    In 2019, IUCAA Director Somak Raychaudhury announced that a drone-like probe was being considered as part of the mission. 

    ISRO scientist T Maria Antonita stated in a report to NASA's Decadal Planetary Science Committee that the launch would take place in December 2024. 

    She also said that a backup date in 2026 exists. 



    ISRO had shortlisted 20 international proposals as of November 2020, including collaborations with Russia, France, Sweden, and Germany. 


    ISRO and the Swedish Institute of Space Physics are working together on the Shukrayaan-1 project. 

    ISRO chairman S. Somanath indicated in May 2022 that the mission would launch in December 2024, with a backup launch window in 2031.



    Shukrayaan Mission Salient Features





    Mission type: Venus orbiter (Shukrayaan-1)

    Operator: ISRO

    Planned mission duration: 4 years


    Spacecraft characteristics:


    Manufacturer: ISAC

    Launch mass: 2,500 kg (5,500 lb)

    Payload mass: 100 kg (220 lb)

    Payload power: 500 W (0.67 hp)

    Planned launch date: December 2024

    Launch Vehicle: GSLV Mark II rocket


    Launch site: SDSC SHAR

    Contractor: ISRO


    Mission's Primary Components:

    • Venus orbiter
    • Venus atmospheric probe
    • Aerobot balloon

    ~ Jai Krishna Ponnappan.


    AI - What Is Superintelligence AI? Is Artificial Superintelligence Possible?

     


     

    In its most common use, the phrase "superintelligence" refers to any degree of intelligence that at least equals, if not always exceeds, human intellect, in a broad sense.


    Though computer intelligence has long outperformed natural human cognitive capacity in specific tasks (for example, a calculator's ability to swiftly execute algorithms), these are not often considered examples of superintelligence in the strict sense due to their limited functional range.


    In this sense, superintelligence would necessitate, in addition to artificial mastery of specific theoretical tasks, some kind of additional mastery of what has traditionally been referred to as practical intelligence: a generalized sense of how to subsume particulars into universal categories that are in some way worthwhile.


    To this day, no such generalized superintelligence has manifested, and hence all discussions of superintelligence remain speculative to some degree.


    Whereas traditional theories of superintelligence have been limited to theoretical metaphysics and theology, recent advancements in computer science and biotechnology have opened up the prospect of superintelligence being materialized.

    Although the timing of such evolution is hotly discussed, a rising body of evidence implies that material superintelligence is both possible and likely.


    If this hypothesis proves right, it will almost certainly be the result of advances in one of two major areas of AI research:


    1. Bioengineering 
    2. Computer science





    The former involves efforts to not only map out and manipulate the human DNA, but also to exactly copy the human brain electronically through full brain emulation, also known as mind uploading.


    The first of these bioengineering efforts is not new, with eugenics programs reaching back to the seventeenth century at the very least.

    Despite the major ethical and legal issues that always emerge as a result of such efforts, the discovery of DNA in the twentieth century, together with advances in genome mapping, has rekindled interest in eugenics.

    Much of this study is aimed at gaining a better understanding of the human brain's genetic composition in order to manipulate DNA code in the direction of superhuman intelligence.



    Uploading is a somewhat different, but still biologically based, approach to superintelligence that aims to map out neural networks in order to successfully transfer human intelligence onto computer interfaces.


    • The brains of insects and tiny animals are micro-dissected and then scanned for thorough computer analysis in this relatively new area of study.
    • The underlying premise of whole brain emulation is that if the brain's structure is better known and mapped, it may be able to copy it with or without organic brain tissue.



    Despite the fast growth of both genetic mapping and whole brain emulation, both techniques have significant limits, making it less likely that any of these biological approaches will be the first to attain superintelligence.





    The genetic alteration of the human genome, for example, is constrained by generational constraints.

    Even if it were now feasible to artificially boost cognitive functioning by modifying the DNA of a human embryo (which is still a long way off), it would take an entire generation for the changed embryo to evolve into a fully fledged, superintelligent human person.

    This would also assume that there are no legal or moral barriers to manipulating human DNA, which is far from the case.

    Even the comparatively minor genetic manipulation of human embryos carried out by a Chinese scientist as recently as November 2018 sparked international outrage (Ramzy and Wee 2019).



    Whole brain emulation, on the other hand, is still a long way off, owing to biotechnology's limits.


    Given the current medical technology, the extreme levels of accuracy necessary at every step of the uploading process are impossible to achieve.

    Science and technology currently lack the capacity to dissect and scan human brain tissue with sufficient precision to produce full brain simulation results.

    Furthermore, even if such first steps are feasible, researchers would face significant challenges in analyzing and digitally replicating the human brain using cutting-edge computer technology.




    Many analysts believe that such constraints will be overcome, although the timeline for such realizations is unknown.



    Apart from biotechnology, the area of AI, which is strictly defined as any type of nonorganic (particularly computer-based) intelligence, is the second major path to superintelligence.

    Of course, the work of creating a superintelligent AI from the ground up is complicated by a number of elements, not all of which are purely logistical in nature, such as processing speed, hardware/software design, finance, and so on.

    In addition to such practical challenges, there is a significant philosophical issue: human programmers are unable to know, and so cannot program, that which is superior to their own intelligence.





    Much contemporary research on computer learning and interest in the notion of a seed AI is motivated in part by this worry.


    The latter is defined as any machine capable of changing its responses to stimuli based on an examination of how well it performs relative to a predetermined objective.

    Importantly, the concept of a seed AI entails not only the capacity to change its replies by extending its base of content knowledge (stored information), but also the ability to change the structure of its programming to better fit a specific job (Bostrom 2017, 29).

    Indeed, it is this latter capability that would give a seed AI what Nick Bostrom refers to as "recursive self-improvement," or the ability to evolve iteratively (Bostrom 2017, 29).

    This would eliminate the requirement for programmers to have an a priori vision of superintelligence, since the seed AI would constantly enhance its own programming, with each more intelligent iteration writing a superior version of itself (beyond the human level).
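    The distinction can be illustrated with a deliberately toy example: the "agent" below not only tunes a parameter against a fixed objective but also swaps out its own update rule when progress stalls. This is a conceptual sketch of performance-driven self-modification only, not a claim about how an actual seed AI would be built.

```python
# Toy illustration of performance-driven self-modification: the agent adjusts
# a parameter against a fixed objective AND replaces its own update rule when
# progress stalls. Purely conceptual; not a real seed AI.

def objective(x):
    return (x - 3.0) ** 2               # the predetermined goal: minimize this

class ToyAgent:
    def __init__(self):
        self.x = 0.0
        self.step = 1.0
        self.update = self.coarse_update    # the rule the agent may later replace

    def coarse_update(self):
        # try a fixed step in both directions, keep whichever scores best
        candidates = [self.x - self.step, self.x, self.x + self.step]
        self.x = min(candidates, key=objective)

    def fine_update(self):
        # a "rewritten" rule: shrink the step each call for a finer search
        self.step *= 0.5
        self.coarse_update()

    def run(self, iterations=20):
        best = objective(self.x)
        for _ in range(iterations):
            self.update()
            score = objective(self.x)
            if score >= best:
                # progress stalled: modify the agent's own update procedure
                self.update = self.fine_update
            best = min(best, score)
        return self.x

print(ToyAgent().run())                 # converges near 3.0
```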

    Such a machine would undoubtedly cast doubt on the conventional philosophical assumption that robots are incapable of self-awareness.

    This perspective's proponents may be traced all the way back to Descartes, but they also include more current thinkers like John Haugeland and John Searle.



    Machine intelligence, in this perspective, is defined as the successful correlation of inputs with outputs according to a predefined program.




    As a result, robots differ from humans in kind, the latter alone being characterized by conscious self-awareness.

    Humans are supposed to comprehend the activities they execute, but robots are thought to carry out functions mindlessly—that is, without knowing how they work.

    Should a successful seed AI ever be constructed, this core idea would have to be challenged.

    The seed AI would demonstrate a level of self-awareness and autonomy not readily explained by the Cartesian philosophical paradigm by upgrading its own programming in ways that surprise and defy the forecasts of its human programmers.

    Indeed, although it remains speculative for the time being, the increasingly plausible prospect of superintelligent AI raises a slew of moral and legal dilemmas that have sparked a great deal of philosophical discussion on the subject.

    The main worries are about the human species' security in the case of what Bostrom refers to as an "intelligence explosion," that is, the creation of a seed AI followed by possibly exponential growth in intelligence (Bostrom 2017).



    One of the key problems is the inherently unexpected character of such a result.


    Humans will not be able to totally foresee how superintelligent AI would act due to the autonomy entailed by superintelligence in a definitional sense.

    Even in the few cases of specialized superintelligence that humans have so far been able to construct and study (for example, programs that have surpassed humans in strategic games such as chess and Go), human forecasts for AI have proven to be very unreliable.

    For many critics, such unpredictability is a significant indicator that, should more generic types of superintelligent AI emerge, humans would swiftly lose their capacity to manage them (Kissinger 2018).





    Of course, such a loss of control does not automatically imply an adversarial relationship between humans and superintelligence.


    Indeed, although most of the literature on superintelligence portrays this relationship as adversarial, some new work claims that this perspective reveals a prejudice against machines that is particularly prevalent in Western cultures (Knight 2014).

    Nonetheless, there are compelling grounds to believe that superintelligent AI would at the very least consider human goals as incompatible with their own, and may even regard humans as existential dangers.

    For example, computer scientist Steve Omohundro has claimed that even a relatively basic kind of superintelligent AI like a chess bot would have motive to want the extinction of humanity as a whole—and may be able to build the tools to do it (Omohundro 2014).

    Similarly, Bostrom has claimed that a superintelligence explosion would most certainly result in, if not the extinction of the human race, then at the very least a gloomy future (Bostrom 2017).

    Whatever the merits of such theories, the great uncertainty entailed by superintelligence is obvious.

    If there is one point of agreement in this large and diverse literature, it is that if AI research is to continue, the global community must take great care to protect its interests.





    Hardened determinists who claim that technological advancement is so tightly connected to inflexible market forces that it is simply impossible to change its pace or direction in any major manner may find this statement contentious.


    According to this determinist viewpoint, if AI can deliver cost-cutting solutions for industry and commerce (as it has already started to do), its growth will proceed into the realm of superintelligence, regardless of any unexpected negative repercussions.

    Many skeptics argue that growing societal awareness of the potential risks of AI, as well as thorough political monitoring of its development, are necessary counterpoints to such viewpoints.


    Bostrom highlights various examples of effective worldwide cooperation in science and technology as crucial precedents that challenge the determinist approach, including CERN, the Human Genome Project, and the International Space Station (Bostrom 2017, 253).

    To this, one may add examples from the worldwide environmental movement, which began in the 1960s and 1970s and has imposed significant restrictions on pollution committed in the name of uncontrolled capitalism (Feenberg 2006).



    Given the speculative nature of superintelligence research, it is hard to predict what the future holds.

    However, if superintelligence poses an existential danger to human existence, caution would dictate that a worldwide collaborative strategy rather than a free market approach to AI be used.



    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    You may also want to read more about Artificial Intelligence here.



    See also: 


    Berserkers; Bostrom, Nick; de Garis, Hugo; General and Narrow AI; Goertzel, Ben; Kurzweil, Ray; Moravec, Hans; Musk, Elon; Technological Singularity; Yudkowsky, Eliezer.



    References & Further Reading:


    • Bostrom, Nick. 2017. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.
    • Feenberg, Andrew. 2006. “Environmentalism and the Politics of Technology.” In Questioning Technology, 45–73. New York: Routledge.
    • Kissinger, Henry. 2018. “How the Enlightenment Ends.” The Atlantic, June 2018. https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/.
    • Knight, Heather. 2014. How Humans Respond to Robots: Building Public Policy Through Good Design. Washington, DC: The Project on Civilian Robotics. Brookings Institution.
    • Omohundro, Steve. 2014. “Autonomous Technology and the Greater Human Good.” Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3: 303–15.
    • Ramzy, Austin, and Sui-Lee Wee. 2019. “Scientist Who Edited Babies’ Genes Is Likely to Face Charges in China.” The New York Times, January 21, 2019.



    AI - Smart Homes And Smart Cities.

     



    Projects to develop the infrastructure for smart cities and houses are involving public authorities, professionals, businessmen, and residents all around the world.


    These smart cities and houses make use of information and communication technology (ICT) to enhance quality of life, local and regional economies, urban planning and transportation, and government.


    Urban informatics is a new area that gathers data, analyzes patterns and trends, and utilizes the information to implement new ICT in smart cities.

    Data may be gathered from a number of different sources.

    Surveillance cameras, smart cards, internet of things sensor networks, smart phones, RFID tags, and smart meters are just a few examples.

    In real time, any kind of data may be captured.

    Data on mass transit utilization may be obtained from passenger occupancy and flow.

    Road sensors can count cars on the road or in parking lots.



    Cities may also use urban machine vision technologies to determine individuals' wait times for local government services.


    From public thoroughfares and sidewalks, license plate numbers and people's faces may be identified and documented.

    Tickets may be issued, and statistics on crime can be gathered.

    The information gathered in this manner may be compared to other big datasets on neighborhood income, racial and ethnic mix, utility reliability statistics, and air and water quality indices.



    Artificial intelligence (AI) may be used to build or improve city infrastructure.




    Stop signal frequencies at crossings are adjusted and optimized based on data acquired regarding traffic movements.


    This is known as intelligent traffic signaling, and it has been found to cut travel and wait times, as well as fuel consumption, significantly.
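    A crude version of this idea can be written in a few lines: apportion the green time in a fixed signal cycle in proportion to the vehicle counts measured on each approach. Real adaptive controllers are far more sophisticated; the function and numbers below are illustrative assumptions only.

```python
# Illustrative flow-proportional signal timing: split a fixed cycle's green
# time across approaches in proportion to observed vehicle counts.
# Real adaptive traffic controllers are far more sophisticated than this.

def green_splits(counts, cycle_s=90, min_green_s=10):
    """counts: mapping of approach name -> vehicles observed last cycle."""
    total = sum(counts.values()) or 1
    available = cycle_s - min_green_s * len(counts)
    return {
        approach: min_green_s + available * n / total
        for approach, n in counts.items()
    }

# Example: the north-south approach is carrying most of the traffic.
print(green_splits({"north-south": 42, "east-west": 14}))
```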

    Smart parking structures assist cars in quickly locating available parking spaces.


    Law enforcement is using license plate identification and face recognition technologies to locate suspects and witnesses at crime scenes.

    ShotSpotter, a business that triangulates the position of gunshots using a sensor network placed in special streetlights, tracked over 75,000 gunshots in 2018 and alerted police agencies to them.
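    At its core, this kind of triangulation is multilateration from differences in arrival times across sensors. The sketch below brute-forces a grid search over candidate source positions; it is a simplified illustration of the principle, not ShotSpotter's algorithm, and all coordinates are made up.

```python
# Simplified illustration of locating a sound source from arrival-time
# differences at several sensors (the principle behind acoustic gunshot
# location). Grid search only; not ShotSpotter's actual algorithm.
import math

SPEED_OF_SOUND = 343.0   # m/s

sensors = [(0.0, 0.0), (400.0, 0.0), (0.0, 400.0), (400.0, 400.0)]  # made-up positions, m

def arrival_times(source, t0=0.0):
    return [t0 + math.dist(source, s) / SPEED_OF_SOUND for s in sensors]

def locate(times, step=5.0, extent=400.0):
    """Find the grid point whose predicted arrival-time *differences* best match."""
    observed = [t - times[0] for t in times]
    best, best_err = None, float("inf")
    x = 0.0
    while x <= extent:
        y = 0.0
        while y <= extent:
            pred = arrival_times((x, y))
            pred_diff = [t - pred[0] for t in pred]
            err = sum((o - p) ** 2 for o, p in zip(observed, pred_diff))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

true_source = (150.0, 250.0)
print(locate(arrival_times(true_source)))   # close to (150, 250)
```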

    Information on traffic and pedestrian deaths is also being mined via big data initiatives.

    Vision Zero is a global highway safety initiative that aspires to decrease road fatalities to zero.

    Data analysis using algorithms has resulted in road safety efforts as well as road redesign that has saved lives.



    Cities have also been able to respond more swiftly to severe weather occurrences thanks to ubiquitous sensor technology.


    In Seattle, for example, conventional radar data is combined with RainWatch, a network of rain gauges.

    Residents get warnings from the system, and maintenance staff are alerted to possible problem places.

    Transport interconnection enabling completely autonomous autos is one long-term aim for smart cities.

    At best, today's autonomous cars can monitor their surroundings to make judgments and avoid crashes with other vehicles and numerous road hazards.

    However, cars that communicate with one another in multiple directions are likely to enable fully autonomous driving systems.

    In these systems, collisions are not only avoided in the moment but anticipated and prevented.


    Smart cities are often mentioned in conjunction with smart economy initiatives and foreign investment development by planners.


    Data-driven entrepreneurial innovation, as well as productivity analyses and evaluation, might be indicators of sensible economic initiatives.

    Some smart towns want to emulate Silicon Valley's success.

    Neom, Saudi Arabia, is one such project.

    It is a proposed megacity that is expected to cost half a trillion dollars to build.

    Artificial intelligence is seen as the new oil in the city's ambitions, despite sponsorship by Saudi Aramco, the state-owned petroleum giant.

    Everything will be controlled by interconnected computer equipment and future artificial intelligence decision-making, from home technology to transportation networks and electronic medical record distribution.


    One of Saudi Arabia's most significant cultural activities—monitoring the density and pace of pilgrims around the Kaaba in Mecca—has already been entrusted to AI vision technologies.

    The AI is intended to avert a disaster on the scale of the 2015 Mina Stampede, which claimed the lives of 2,000 pilgrims.

    The use of highly data-driven and targeted public services is another trademark of smart city programs.

    Information-driven agencies are frequently referred to as "smart" or "e-government" when they work together.


    Open data projects to encourage openness and shared engagement in local decision-making might be part of smart governance.


    Local governments will collaborate with contractors to develop smart utility networks for the provision of electricity, telecommunications, and the internet.

    In Barcelona, waste bins are linked to the global positioning system and cloud servers and alert collection vehicles when garbage is ready for pickup, enabling smart waste management and recycling initiatives.

    In certain areas, lamp poles have been converted into community wi-fi hotspots or mesh network nodes and provide pedestrians with dynamic safety lighting.

    Forest City in Malaysia, Eko Atlantic in Nigeria, Hope City in Ghana, Kigamboni New City in Tanzania, and Diamniadio Lake City in Senegal are among the high-tech centres proposed or under development.


    Artificial intelligence is predicted to be the brain of the smart city in the future.


    Artificial intelligence will personalize city experiences to match the demands of specific inhabitants or tourists.

    Through customized glasses or heads-up displays, augmented systems may give virtual signs or navigational information.

    Based on previous use and location data, intelligent smartphone agents are already capable of predicting user movements.
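    One simple way such prediction can work is a first-order Markov model over visited places: count transitions in the location history and predict the most frequent successor of the current place. The sketch below is a minimal, hypothetical example of that idea, not a description of any particular product.

```python
# Minimal first-order Markov predictor over a location history: the next
# place is predicted as the most frequent successor of the current one.
# Purely illustrative; real smartphone agents use much richer models.
from collections import Counter, defaultdict

def train(history):
    transitions = defaultdict(Counter)
    for here, there in zip(history, history[1:]):
        transitions[here][there] += 1
    return transitions

def predict_next(transitions, current):
    if current not in transitions:
        return None
    return transitions[current].most_common(1)[0][0]

history = ["home", "cafe", "office", "gym", "home",
           "cafe", "office", "home", "cafe", "office"]
model = train(history)
print(predict_next(model, "cafe"))   # 'office'
```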


    Artificial intelligence technologies are used in smart homes in a similar way.


    Google Home and other smart hubs now integrate with over 5,000 different types of smart gadgets sold by 400 firms to create intelligent environments in people's homes.

    Amazon Echo is Google Home's main rival.

    These kinds of technologies can regulate heating, ventilation, and air conditioning, as well as lighting, security, and household products like smart pet feeders.

    In the early 2000s, game-changing developments in home robotics led to widespread consumer acceptance of iRobot's Roomba vacuum cleaner.

    Obsolescence, proprietary protocols, fragmented platforms and interoperability issues, and unequal technological standards have all plagued such systems in the past.


    Machine learning is being pushed forward by smart houses.


    Smart technology's analytical and predictive capabilities are generally regarded as the backbone of one of the most rapidly developing and disruptive commercial sectors: home automation.

    To function properly, the smarter connected home of the future needs to collect fresh data on a regular basis in order to evolve.

    Smart houses continually monitor the interior environment and use aggregated past data to establish settings and functionalities in buildings with smart components installed.

    Smart houses may one day anticipate their owners' requirements, such as automatically changing blinds as the sun and clouds move across the sky.

    A smart house may produce a cup of coffee at precisely the correct time, order Chinese takeout, or play music based on the resident's mood as detected by emotion detectors.
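    A rule of the "adjust the blinds as the sun moves" sort can be expressed as a small function of the current light reading and a learned preference. The sensor values and thresholds below are invented for illustration; a real system would learn the preference from aggregated history.

```python
# Illustrative smart-home rule: choose a blind position from the current
# outdoor light level and the occupant's preferred indoor brightness.
# Sensor readings, gain, and thresholds are invented for illustration.

def blind_position(outdoor_lux, preferred_indoor_lux=300, fully_open_gain=0.4):
    """Return blind openness in [0, 1]; 1.0 means fully open."""
    if outdoor_lux <= 0:
        return 1.0                      # no useful daylight: default to fully open
    # Openness needed so transmitted light roughly matches the preferred level.
    needed = preferred_indoor_lux / (outdoor_lux * fully_open_gain)
    return max(0.0, min(1.0, needed))

for lux in (100, 1_000, 20_000, 80_000):    # dusk, overcast, bright, direct sun
    print(lux, round(blind_position(lux), 2))
```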


    Pervasive, sophisticated technologies are used in smart city and household AI systems.


    The benefits of smart cities are many.

    Smart cities pique people's curiosity because of their promise of increased efficiency and convenience.

    It's enticing to live in a city that anticipates and easily fulfills personal wants.

    Smart cities, however, are not without their detractors.

    Smart homes, if left uncontrolled, have the ability to cause major privacy invasions via continuous video recording and always-on microphones.

    According to reports in 2019, Google contractors were able to listen to recordings of users' exchanges with its popular Google Assistant artificial intelligence system.


    The influence of smart cities and households on the environment is as yet unknown.


    Biodiversity considerations are often ignored in smart city ideas.


    Critical habitat is routinely destroyed in order to create space for the new cities that tech entrepreneurs and government officials desire.

    Conventional fossil-fuel transportation methods continue to reign supreme in smart cities.

    The future viability of smart homes is likewise up in the air.

    A recent study in Finland found that improved metering and consumption monitoring did not successfully cut smart home power use.


    In reality, numerous smart cities that were built from the ground up are now almost completely empty.


    Many years after their initial construction, China's so-called ghost cities, such as Ordos Kangbashi, have attained occupancy levels of only about one-third of their housing units.

    Despite direct, automated vacuum waste collection tubes in individual apartments and building elevators timed to the arrival of residents' automobiles, Songdo, Korea, an early "city in a box," has not lived up to promises.


    Smart cities are often portrayed as impersonal, elitist, and costly, which is the polar opposite of what the creators intended.

    Songdo exemplifies the smart city trend in many aspects, with its underpinning structure of ubiquitous computing technologies that power everything from transportation systems to social networking channels.

    The unrivaled integration and synchronization of services is made possible by the coordination of all devices.

    As a result, by turning itself into an electronic panopticon or surveillance state for observing and controlling residents, the city simultaneously weakens the protective advantages of anonymity in public settings.


    Authorities studying smart city infrastructures are now fully aware of the computational biases of proactive and predictive policing.



    ~ Jai Krishna Ponnappan

    Find Jai on Twitter | LinkedIn | Instagram


    You may also want to read more about Artificial Intelligence here.



    See also: 

    Biometric Privacy and Security; Biometric Technology; Driverless Cars and Trucks; Intelligent Transportation; Smart Hotel Rooms.


    References & Further Reading:


    Albino, Vito, Umberto Berardi, and Rosa Maria Dangelico. 2015. “Smart Cities: Definitions, Dimensions, Performance, and Initiatives.” Journal of Urban Technology 22, no. 1: 3–21.

    Batty, Michael, et al. 2012. “Smart Cities of the Future.” European Physical Journal Special Topics 214, no. 1: 481–518.

    Friedman, Avi. 2018. Smart Homes and Communities. Mulgrave, Victoria, Australia: Images Publishing.

    Miller, Michael. 2015. The Internet of Things: How Smart TVs, Smart Cars, Smart Homes, and Smart Cities Are Changing the World. Indianapolis: Que.

    Shepard, Mark. 2011. Sentient City: Ubiquitous Computing, Architecture, and the Future of Urban Space. New York: Architectural League of New York.

    Townsend, Antony. 2013. Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia. New York: W. W. Norton & Company.




