
Artificial Intelligence - What Are Mobile Recommendation Assistants?

 




Mobile Recommendation Assistants, also known as Virtual Assistants, Intelligent Agents, or Virtual Personal Assistants, are a collection of software features that combine a conversational user interface with artificial intelligence to act on behalf of a user.

Working together, these features can give the user the impression of interacting with a single agent.

In this sense, an agent differs from a tool in that it can act on the user's behalf and make choices with some degree of autonomy.

Many qualities may be incorporated into the design of mobile recommendation assistants to strengthen the user's impression of agency.

Examples of such tactics include representing the technology with a visual avatar, incorporating elements of personality such as humor or informal, colloquial language, giving it a voice and a plausible name, and constructing a consistent way of behaving.

A human user can use a mobile recommendation assistant to help them with a wide range of tasks, such as opening software applications, answering questions, performing tasks (operating other software/hardware), or engaging in conversational commerce or entertainment (telling stories, telling jokes, playing games, etc.).

Apple's Siri, Baidu's Xiaodu, Amazon's Alexa, Microsoft's Cortana, Google's Google Assistant, and Xiaomi's Xiao AI are among the mobile voice assistants in active development, each designed for particular companies, use cases, and user experiences.

Mobile recommendation assistants employ a range of user interface modalities.

Some are entirely text-based; these are referred to as chatbots.

Business to consumer (B2C) communication is the most common use case for a chatbot, and notable applications include online retail communication, insurance, banking, transportation, and restaurants.

Chatbots are increasingly being employed in medical and psychological applications, such as assisting users with behavior modification.

Similar apps are becoming more popular in educational settings to help students with language learning, studying, and exam preparation.

Facebook Messenger is a prominent platform for chatbots on social media.

While not all mobile recommendation assistants require voice-enabled interaction as an input modality (some, such as website chatbots, rely entirely on text input), many contemporary examples do.

A mobile recommendation assistant builds on a number of predecessor technologies, including the voice-enabled user interface.

Early voice-enabled user interfaces were made feasible by a command syntax that was hand-coded as a collection of rules or heuristics in advance.

These rule-based systems allowed users to operate devices hands-free by issuing voice commands.
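As a minimal sketch of this approach (the commands and grammar below are invented for illustration), a rule-based voice interface amounts to a fixed table of hand-written patterns; anything the user says outside that preset syntax simply fails:

```python
import re

# Hand-coded command grammar: each rule pairs a regex with a canned action.
# The commands shown here are hypothetical examples.
RULES = [
    (re.compile(r"^call (?P<name>\w+)$"), lambda m: f"dialing {m['name']}"),
    (re.compile(r"^play (?P<song>.+)$"), lambda m: f"playing {m['song']}"),
    (re.compile(r"^set alarm for (?P<time>\d{1,2}(:\d{2})?)$"),
     lambda m: f"alarm set for {m['time']}"),
]

def handle(utterance: str) -> str:
    """Match a transcribed utterance against the fixed command syntax."""
    text = utterance.lower().strip()
    for pattern, action in RULES:
        match = pattern.match(text)
        if match:
            return action(match)
    # Anything outside the preset grammar fails: the key limitation
    # of early rule-based voice interfaces.
    return "command not recognized"

print(handle("Call Alice"))            # dialing alice
print(handle("Could you call Alice"))  # command not recognized
```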

IBM produced the first voice recognition program, which was exhibited during the 1962 World's Fair in Seattle.

The IBM Shoebox had a limited vocabulary of sixteen words and nine digits.

By the 1990s, IBM and Microsoft personal computers and software included basic speech recognition; Apple's Siri, which debuted on the iPhone 4s in 2011, was the first such assistant deployed on a mobile phone.

These early voice recognition systems compared poorly with conversational mobile agents in terms of user experience, since they required the user to learn and adhere to a preset command language.

Rule-based voice interaction tends to sound mechanical and falls short of the natural, humanlike conversation that characterizes current mobile recommendation assistants.

Instead, natural language processing (NLP) uses machine learning and statistical inference to learn rules from enormous amounts of linguistic data (corpora).

Machine learning for natural language processing uses techniques such as decision trees and statistical modeling to understand requests phrased in the many ways people ordinarily communicate with one another.
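To make the contrast with rule-based systems concrete, here is a rough sketch of statistical intent classification: a decision tree trained on bag-of-words features maps differently phrased requests to a shared intent. The tiny corpus, the intent labels, and the choice of scikit-learn are assumptions made for this example only:

```python
# A toy statistical intent classifier, assuming scikit-learn is installed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Invented training corpus: a few phrasings per intent.
corpus = [
    "how is the weather today", "what will the weather be tomorrow",
    "set an alarm for seven", "cancel my alarm",
    "play some music", "stop the music",
]
intents = ["weather", "weather", "alarm", "alarm", "music", "music"]

# Bag-of-words features feed a decision tree, so varied phrasings of the
# same request can map to one intent without hand-written rules.
model = make_pipeline(CountVectorizer(), DecisionTreeClassifier(random_state=0))
model.fit(corpus, intents)

print(model.predict(["is the weather nice today"]))  # expected: ['weather']
```

Unlike the regex grammar above, nothing here enumerates the acceptable sentences; the model generalizes from examples, which is why large corpora matter.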

Advanced agents may be able to infer a user's intent in light of explicit preferences expressed via settings or other inputs, such as calendar entries.

Google Assistant uses a mix of probabilistic reasoning and natural language processing to construct a natural-sounding dialogue, including conversational components such as paralanguage ("uh", "uh-huh", "ummm").

To convey knowledge and attention, modern digital assistants use multimodal communication.

Paralanguage refers to communication components that don't have semantic content but are nonetheless important for conveying meaning in context.

These may be used to show purpose, collaboration in a dialogue, or emotion.

The aspects of paralanguage used in Google's voice assistant with Duplex technology are termed vocal segregates or speech disfluencies; they are intended not only to make the assistant sound more human, but also to help the dialogue "flow" by filling gaps and making the listener feel heard.
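The effect can be illustrated with a toy sketch (this is not Google's Duplex implementation; the filler inventory and insertion probability are invented) that occasionally prepends a vocal segregate to a synthesized response:

```python
import random

# Invented inventory of fillers (vocal segregates / speech disfluencies).
FILLERS = ["um", "uh", "mm-hmm", "hmm"]

def add_disfluency(sentence: str, probability: float = 0.3) -> str:
    """Occasionally prefix a response with a filler so turn-taking
    sounds less mechanical and gaps in the dialogue are filled."""
    if random.random() < probability:
        return f"{random.choice(FILLERS)}, {sentence}"
    return sentence

print(add_disfluency("Your table is booked for seven o'clock."))
```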

Another key aspect of engagement is kinesics, which can make an assistant feel like an attentive conversation partner.

Kinesics is the use of gestures, movements, facial expressions, and emotion to aid in the flow of communication.

The car firm NIO's virtual robot assistant, NOMI, is one recent example of the use of facial expression.

Nome is a digital voice assistant that sits above the central dashboard of NIO's ES8 in a spherical shell with an LCD screen.

It can swivel its "head" automatically to attend to various speakers and display emotions using facial expressions.

Another example is Jibo, the commercial home robot from MIT's Dr. Cynthia Breazeal, which achieves anthropomorphism through paralinguistic approaches.

In less anthropomorphic applications of kinesics, motion graphics or lighting animations communicate states of communication such as listening, thinking, speaking, or waiting; examples include the graphical user interface elements of Apple's Siri, the illumination array on Amazon's Echo (Alexa's physical interface), and Xiaomi's Xiao AI.
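A minimal sketch of this pattern follows, with hypothetical state names and animation descriptions (loosely in the spirit of Alexa's light ring, not its actual firmware):

```python
from enum import Enum, auto

class DialogueState(Enum):
    LISTENING = auto()
    THINKING = auto()
    SPEAKING = auto()
    WAITING = auto()

# Each conversational state maps to a light animation instead of a face.
ANIMATIONS = {
    DialogueState.LISTENING: "solid ring, steady brightness",
    DialogueState.THINKING: "slow pulsing",
    DialogueState.SPEAKING: "brightness tracks speech volume",
    DialogueState.WAITING: "lights off",
}

def show_state(state: DialogueState) -> None:
    # On real hardware this would drive an LED array; here we just log it.
    print(f"{state.name}: {ANIMATIONS[state]}")

for state in DialogueState:
    show_state(state)
```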

The rising intelligence of these systems, and the anthropomorphism (or, in some circumstances, zoomorphism or mechanomorphism) that comes with it, may pose ethical issues for user experience.

The demand for more anthropomorphic systems derives from the positive user experience of humanlike agentic systems whose communicative behaviors, made feasible by natural language and paralinguistics, align closely with familiar interactions such as conversation.

Natural conversation systems have the fundamental advantage that the user does not need to learn a new syntax or semantics to convey commands and wants successfully.

These more humanlike human-machine interfaces can draw on the user's familiar mental model of communication, acquired through interacting with other people.

As machine systems draw closer to human-to-human interaction, transparency and security become difficult, because a user's judgments about a machine's behavior are shaped by expectations carried over from human-to-human communication.

The establishment of comfort and rapport may obscure the differences between virtual assistant cognition and assumed motivation.

Many systems may be outfitted with motion sensors, proximity sensors, cameras, tiny microphones, and other devices that resemble, replicate, or even surpass human capabilities in terms of cognition (the assistant's intellect and perceptive capacity).

While these capabilities can facilitate humanlike interaction by improving the machine's perception of its environment, they can also be used to record, document, analyze, and share information in ways that remain opaque to a user whenever the user's mental model and the machine's interface fail to communicate how the machine actually operates.

After an interaction, a digital assistant's visual avatar may shut its eyes or disappear, but nothing requires that behavior to correspond to whether the microphone and camera have stopped recording.
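A deliberately simplified sketch makes this transparency gap concrete: the avatar's visible state and the sensors' state are independent attributes, so closed eyes imply nothing about recording. All names here are hypothetical:

```python
class AssistantDevice:
    """Toy model: the avatar animation and the sensor pipeline are
    separate subsystems with no enforced link between them."""

    def __init__(self) -> None:
        self.avatar_awake = True
        self.microphone_recording = True  # sensor state, set independently

    def end_interaction(self) -> None:
        self.avatar_awake = False  # the avatar "closes its eyes"...
        # ...but nothing here stops the microphone.

device = AssistantDevice()
device.end_interaction()
print(device.avatar_awake, device.microphone_recording)  # False True
```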

As digital assistants become more incorporated into human users' daily lives, data privacy issues are becoming more prominent.

Transparency becomes a significant problem to solve when specifications, manufacturers' data collection aims, and machine actions are potentially mismatched with user expectations.

Finally, when it comes to data storage, personal information, and sharing methods, security becomes a concern, as hacking, disinformation, and other types of abuse threaten to undermine faith in technology systems and organizations.


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Artificial Intelligence here.



See also: 


Chatbots and Loebner Prize; Mobile Recommendation Assistants; Natural Language Processing and Speech Understanding.


References & Further Reading:


Lee, Gary G., Hong Kook Kim, Minwoo Jeong, and Ji-Hwan Kim, eds. 2015. Natural Language Dialog Systems and Intelligent Assistants. Berlin: Springer.

Leviathan, Yaniv, and Yossi Matias. 2018. “Google Duplex: An AI System for Accomplishing Real-world Tasks Over the Phone.” Google AI Blog. https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html.

Viken, Alexander. 2009. “The History of Personal Digital Assistants, 1980–2000.” Agile Mobility, April 10, 2009.

Waddell, Kaveh. 2016. “The Privacy Problem with Digital Assistants.” The Atlantic, May 24, 2016. https://www.theatlantic.com/technology/archive/2016/05/the-privacy-problem-with-digital-assistants/483950/.

Biased Data Isn't the Only Source of AI Bias.

 





Eliminating bias in artificial intelligence will require addressing both human and systemic biases.


Bias in AI systems is often seen as a technological issue, but the NIST study recognizes that human prejudices, as well as systemic, institutional biases, have a role. 

Researchers at the National Institute of Standards and Technology (NIST) recommend broadening the scope of where we look for the source of these biases — beyond the machine learning processes and data used to train AI software to the broader societal factors that influence how technology is developed — as a step toward improving our ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems. 

The advice is at the heart of a new NIST article, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), which incorporates feedback from the public on a draft version issued last summer. 


The publication provides guidelines related to the AI Risk Management Framework that NIST is creating as part of a wider effort to facilitate the development of trustworthy and responsible AI. 


The key difference between the draft and final versions of the article, according to NIST's Reva Schwartz, is the increased focus on how bias presents itself not just in AI algorithms and the data used to train them, but also in the sociocultural environment in which AI systems are employed. 

"Context is crucial," said Schwartz, one of the report's authors and the primary investigator for AI bias. 

"AI systems don't work in a vacuum. They assist individuals in making choices that have a direct impact on the lives of others. If we want to design trustworthy AI systems, we must take into account all of the elements that might undermine public confidence in AI. Many of these variables extend beyond the technology itself to its consequences, as shown by the responses we got from a diverse group of individuals and organizations." 

NIST contributes to the research, standards, and data needed to fulfill artificial intelligence's (AI) full potential as a driver of American innovation across industries and sectors. 

NIST is working with the AI community to define the technological prerequisites for cultivating confidence in AI systems that are accurate and dependable, safe and secure, explainable, and bias-free. 


AI bias is harmful to humans. 


AI may make choices on whether or not a student is admitted to a school, approved for a bank loan, or accepted as a rental applicant. 

Machine learning software, for example, might be trained on a dataset that underrepresents a certain gender or ethnic group.
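One simple computational check for this kind of skew, sketched below with invented sample counts and reference shares, compares each group's representation in the training data against its share of the target population:

```python
from collections import Counter

# Hypothetical training records labeled by group membership.
training_groups = ["group_a"] * 900 + ["group_b"] * 100
# Hypothetical shares of each group in the population the model will serve.
reference_shares = {"group_a": 0.5, "group_b": 0.5}

counts = Counter(training_groups)
total = sum(counts.values())
for group, expected in reference_shares.items():
    observed = counts[group] / total
    print(f"{group}: {observed:.0%} of training data vs {expected:.0%} of population")

# group_b supplies 10% of the examples despite being 50% of the population,
# so a model fit to this data is likely to underperform for group_b.
```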

While these computational and statistical causes of bias remain relevant, the new NIST article emphasizes that they do not capture the whole story. 

Human and systemic biases, which play a large role in the new edition, must also be taken into consideration for a more thorough understanding of bias.

Institutions that operate in ways that disfavor specific social groups, such as discriminating against persons based on race, are examples of systemic biases. 

Human biases may relate to how individuals use data to fill in gaps, such as when a person's neighborhood influences how likely police are to consider them a criminal suspect.

When human, institutional, and computational biases come together, they may create a dangerous cocktail – particularly when there is no specific direction for dealing with the hazards of deploying AI systems. 

"If we are to construct trustworthy AI systems, we must take into account all of the elements that might erode public faith in AI." 

Many of these considerations extend beyond the technology itself to the technology's consequences." —Reva Schwartz, AI bias main investigator To address these concerns, the NIST authors propose a "socio-technical" approach to AI bias mitigation. 


This approach recognizes that AI acts in a wider social context — and that attempts to overcome the issue of bias just on a technological level would fall short. 


"When it comes to AI bias concerns, organizations sometimes gravitate to highly technical solutions," Schwartz added. 

"However, these techniques fall short of capturing the social effect of AI systems. The growth of artificial intelligence into many facets of public life necessitates broadening our perspective to include AI as part of the wider social system in which it functions." 

According to Schwartz, socio-technical approaches to AI are a developing field, and creating measuring tools that take these elements into account would need a diverse mix of disciplines and stakeholders. 

"It's critical to bring in specialists from a variety of sectors, not just engineering," she added, "and to listen to other organizations and communities about the implications of AI." 

Over the next several months, NIST will host a series of public workshops aimed at creating a technical study on AI bias and integrating it into the AI Risk Management Framework.


Visit the AI RMF workshop website for further information and to register.



A Method for Reducing Artificial Intelligence Bias Risk. 


The National Institute of Standards and Technology (NIST) is advancing an approach for identifying and managing biases in artificial intelligence (AI) — and is asking for the public's help in improving it — in an effort to combat the often pernicious effect of biases in AI that can harm people's lives and public trust in AI. 


A Proposal for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), a new publication from NIST, lays out the methodology. 


It's part of the agency's larger effort to encourage the development of trustworthy and responsible AI. 


NIST will accept public comments on the paper through September 10, 2021 (an extension of the initial deadline of August 5, 2021), and the authors will use the feedback to help define the topics of several collaborative virtual events NIST will host in the coming months. 


This series of events aims to engage the stakeholder community and provide them the opportunity to contribute feedback and ideas on how to reduce the danger of bias in AI. 


"Managing the danger of bias in AI is an important aspect of establishing trustworthy AI systems, but the route to accomplishing this remains uncertain," said Reva Schwartz of the National Institute of Standards and Technology, who was one of the report's authors. 

"We intend to include the community in the development of voluntary, consensus-based norms for limiting AI bias and decreasing the likelihood of negative consequences." 


NIST contributes to the research, standards, and data needed to fulfill artificial intelligence's (AI) full potential as a catalyst for American innovation across industries and sectors. 


NIST is working with the AI community to define the technological prerequisites for cultivating confidence in AI systems that are accurate and dependable, safe and secure, explainable, and bias-free. 

Bias in AI-based goods and systems is a critical, yet poorly defined, component of trustworthiness. 

This bias may be intentional or unintentional. 


NIST is working to get us closer to consensus on recognizing and quantifying bias in AI systems by organizing conversations and conducting research. 


Because AI can typically make sense of information faster and more reliably than humans, it has become a transformational technology. 

Everything from medical diagnostics to the digital assistants on our cellphones now uses AI. 

However, as AI's uses have expanded, we have seen that its conclusions can be skewed by biases in the data it is given: data that partially or erroneously represents the real world. 

Furthermore, some AI systems are designed to simulate complicated notions that cannot be readily assessed or recorded by data, such as "criminality" or "employment appropriateness." 

Other criteria, such as where you live or how much education you have, are used as proxies for the notions these systems are attempting to mimic. 


The imperfect association of the proxy data with the original notion can lead to undesirable or discriminatory AI outputs, such as wrongful arrests, or eligible candidates being erroneously refused employment or loans. 
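A toy simulation, with all numbers invented, shows how this happens: two groups have identical true job fitness, but the decision rule uses an education proxy that is shifted by group membership rather than by fitness, producing unequal hire rates:

```python
import random

random.seed(0)

def applicant(group: str) -> tuple[float, float]:
    fitness = random.gauss(0.6, 0.15)  # true ability: same distribution for both groups
    # The proxy (education score) is shifted by group, not by fitness.
    education = fitness + (0.2 if group == "a" else 0.0) + random.gauss(0, 0.05)
    return fitness, education

for group in ("a", "b"):
    pool = [applicant(group) for _ in range(10_000)]
    hired = [fit for fit, edu in pool if edu > 0.75]  # decision uses the proxy only
    print(f"group {group}: hire rate {len(hired) / len(pool):.0%}")

# Equally able applicants are hired at very different rates because the
# imperfectly correlated proxy, not fitness, drives the decision.
```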


The strategy the authors suggest for controlling bias comprises a conscious effort to detect and manage bias at multiple phases in an AI system’s lifespan, from early idea through design to release. 

The purpose is to bring together stakeholders from a variety of backgrounds, both within and outside the technology industry, in order to hear viewpoints that haven't been heard before. 

“We want to bring together the community of AI developers of course, but we also want to incorporate psychologists, sociologists, legal experts and individuals from disadvantaged communities,” said NIST’s Elham Tabassi, a member of the National AI Research Resource Task Force. 

"We'd want to hear from individuals who are affected by AI, both those who design AI systems and those who aren't." 


Preliminary research by the NIST authors included a review of peer-reviewed publications, books, and popular news media, as well as industry reports and presentations. 


This review found that bias can seep into AI systems at any stage of development, often in different ways depending on the AI's purpose and the social environment in which it is used. 

"An AI tool is often built for one goal, but it is subsequently utilized in a variety of scenarios," Schwartz said. 

"Many AI applications have also been inadequately evaluated, if at all, in the environment for which they were designed. All these elements might cause bias to go undetected.” 

Because the team members acknowledge that they do not have all of the answers, Schwartz believes it is critical to get public comment, particularly from those who are not often involved in technical conversations. 


"We'd want to hear from individuals who are affected by AI, both those who design AI systems and those who aren't." ~ Elham Tabassi.


"We know bias exists throughout the AI lifespan," added Schwartz. 

"It would be risky to not know where your model is biased or to assume that there is none. The next stage is to figure out how to see it and deal with it."


Comments on the proposed method may be provided by downloading and completing the template form (in Excel format) and emailing it to ai-bias@list.nist.gov by Sept. 10, 2021 (extended from the initial deadline of Aug. 5, 2021). 

This website will be updated with further information on the joint event series.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read and learn more Technology and Engineering here.

You may also want to read and learn more Artificial Intelligence here.




Cyber Security - Location and Context-Awareness.


 


Context-aware IMS solutions emerged as the number of mobile devices and mobile paradigms grew, which made it necessary for IMSs to take users' locations into account [53]. 


1. Location-Based Services.


Location-Based Services (LBSs) are systems that deliver information based on the location of people or devices [54]. 

Some LBSs go a step further in delivering valuable services by using the users' location to infer additional information about the area. 

To that end, existing IMSs track users' whereabouts so that location can be taken into account during management operations. 

An LBS may be either person-oriented or device-oriented, depending on the emphasis of services. 
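As a minimal sketch of a device-oriented LBS query (coordinates and points of interest invented for illustration), the service filters points of interest by great-circle distance from the device's reported position:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km: mean Earth radius

# Hypothetical points of interest: (name, latitude, longitude).
POIS = [("cafe", 40.7420, -74.0050), ("pharmacy", 40.7480, -73.9850)]

def nearby(user_lat: float, user_lon: float, radius_km: float = 1.0) -> list[str]:
    """Return the names of POIs within radius_km of the user's location."""
    return [name for name, lat, lon in POIS
            if haversine_km(user_lat, user_lon, lat, lon) <= radius_km]

print(nearby(40.7410, -74.0030))  # ['cafe']
```

A person-oriented variant would key the same query off a user profile and history rather than only the device's current coordinates.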


2. Scenarios for Context-Aware Applications. 

This section depicts many situations in which the PBM paradigm aids IMSs in the processing and protection of information, as well as the configuration and behavior management of systems. 


3. Proposals for Context-Awareness

Many context-aware services have been proposed in recent years in an attempt to make life simpler. Although the term "context" was coined in 1994, the first context-aware solution in the literature was offered in 1991 [69]. 






~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read and learn more Technology and Engineering here.

You may also want to read and learn more Cyber Security Systems here.





References & Further Reading:



1. OSI. Information Processing Systems-Open System Inteconnection-Systems Management Overview. ISO 10040, 1991.

2. Jefatura del Estado. Ley Orgánica de Protección de Datos de Carácter Personal. www.boe.es/boe/dias/1999/12/14/pdfs/A43088-43099.pdf.

3. D. W. Samuel, and D. B. Louis. The right to privacy. Harvard Law Review, 4(5): 193–220, 1890.

4. A. Westerinen, J. Schnizlein, J. Strassner, M. Scherling, B. Quinn, S. Herzog, A. Huynh, M. Carlson, J. Perry, and S. Waldbusser. Terminology for Policy-Based Management. IETF Request for Comments 3198, November 2001.

5. B. Moore. Policy Core Information Model (PCIM) Extensions. IETF Request for Comments 3460, January 2003.

6. S. Godik, and T. Moses. OASIS EXtensible Access Control Markup Language (XACML). OASIS Committee Specification, 2002.

7. A. Dardenne, A. Van Lamsweerde and S. Fickas. Goal-directed requirements acquisition. Science of Computer Programming, 20(1–2): 3–50, 1993.

8. F. L. Gandon, and N. M. Sadeh. Semantic web technologies to reconcile privacy and context awareness. Web Semantics: Science, Services and Agents on the World Wide Web, 1(3): 241–260, April 2004.

9. I. Horrocks. Ontologies and the semantic web. Communications ACM, 51(12): 58–67, December 2008.

10. R. Boutaba and I. Aib. Policy-based management: A historical perspective. Journal of Network and Systems Management, 15(4): 447–480, 2007.

11. P. A. Carter. Policy-Based Management, In Pro SQL Server Administration, pages 859–886. Apress, Berkeley, CA, 2015.

12. D. Florencio, and C. Herley. Where do security policies come from? In Proceedings of the 6th Symposium on Usable Privacy and Security, pages 10:1–10:14, 2010.

13. K. Yang, and X. Jia. DAC-MACS: Effective data access control for multi-authority Cloud storage systems, IEEE Transactions on Information Forensics and Security, 8(11): 1790–1801, 2014.

14. B. W. Lampson. Dynamic protection structures. In Proceedings of the Fall Joint Computer Conference, pages 27–38, 1969.

15. B. W. Lampson. Protection. ACM SIGOPS Operating Systems Review, 8(1): 18–24, January 1974.

16. D. E. Bell and L. J. LaPadula. Secure Computer Systems: Mathematical Foundations. Technical report, DTIC Document, 1973.

17. D. F. Ferraiolo, and D. R. Kuhn. Role-based access controls. In Proceedings of the 15th NIST-NCSC National Computer Security Conference, pages 554–563, 1992.

18. V. P. Astakhov. Surface integrity: Definition and importance in functional performance, In Surface Integrity in Machining, pages 1–35. Springer, London, 2010.

19. K. J. Biba. Integrity Considerations for Secure Computer Systems. Technical report, DTIC Document, 1977.

20. M. J. Culnan, and P. K. Armstrong. Information privacy concerns, procedural fairness, and impersonal trust: An empirical investigation. Organization Science, 10(1): 104–115, 1999.

21. A. I. Antón, E. Bertino, N. Li, and T. Yu. A roadmap for comprehensive online privacy policy management. Communications ACM, 50(7): 109–116, July 2007.

22. J. Karat, C. M. Karat, C. Brodie, and J. Feng. Privacy in information technology: Designing to enable privacy policy management in organizations. International Journal of Human Computer Studies, 63(1–2): 153–174, 2005.

23. M. Jafari, R. Safavi-Naini, P. W. L. Fong, and K. Barker. A framework for expressing and enforcing purpose-based privacy policies. ACM Transaction Information Systesms Security, 17(1): 3:1–3:31, August 2014.

24. G. Karjoth, M. Schunter, and M. Waidner. Platform for enterprise privacy practices: Privacy-enabled management of customer data, In Proceedings of the International Workshop on Privacy Enhancing Technologies, pages 69–84, 2003.

25. S. R. Blenner, M. Kollmer, A. J. Rouse, N. Daneshvar, C. Williams, and L. B. Andrews. Privacy policies of android diabetes apps and sharing of health information. JAMA, 315(10): 1051–1052, 2016.

26. R. Ramanath, F. Liu, N. Sadeh, and N. A. Smith. Unsupervised alignment of privacy policies using hidden Markov models. In Proceedings of the Annual Meeting of the Association of Computational Linguistics, pages 605–610, June 2014.

27. J. Gerlach, T. Widjaja, and P. Buxmann. Handle with care: How online social network providers’ privacy policies impact users’ information sharing behavior. The Journal of Strategic Information Systems, 24(1): 33–43, 2015.

28. O. Badve, B. B. Gupta, and S. Gupta. Reviewing the Security Features in Contemporary Security Policies and Models for Multiple Platforms. In Handbook of Research on Modern Cryptographic Solutions for Computer and Cyber Security, pages 479–504. IGI Global, Hershey, PA, 2016.

29. K. Zkik, G. Orhanou, and S. El Hajji. Secure mobile multi cloud architecture for authentication and data storage. International Journal of Cloud Applications and Computing 7(2): 62–76, 2017.

30. C. Stergiou, K. E. Psannis, B. Kim, and B. Gupta. Secure integration of IoT and cloud computing. In Future Generation Computer Systems, 78(3): 964–975, 2018.

31. D. C. Verma. Simplifying network administration using policy-based management. IEEE Network, 16(2): 20–26, March 2002.

32. D. C. Verma. Policy-Based Networking: Architecture and Algorithms. New Riders Publishing, Thousand Oaks, CA, 2000.

33. J. Rubio-Loyola, J. Serrat, M. Charalambides, P. Flegkas, and G. Pavlou. A methodological approach toward the refinement problem in policy-based management systems. IEEE Communications Magazine, 44(10): 60–68, October 2006.

34. F. Perich. Policy-based network management for next generation spectrum access control. In Proceedings of International Symposium on New Frontiers in Dynamic Spectrum Access Networks, pages 496–506, April 2007.

35. S. Shin, P. A. Porras, V. Yegneswaran, M. W. Fong, G. Gu, and M. Tyson. FRESCO: Modular composable security services for Software-Defined Networks. In Proceedings of the 20th Annual Network and Distributed System Security Symposium, pages 1–16, 2013.

36. K. Odagiri, S. Shimizu, N. Ishii, and M. Takizawa. Functional experiment of virtual policy based network management scheme in Cloud environment. In International Conference on Network-Based Information Systems, pages 208–214, September 2014.

37. M. Casado, M. J. Freedman, J. Pettit, J. Luo, N. McKeown, and S. Shenker. Ethane: Taking control of the enterprise. In Proceedings of Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, pages 1–12, August 2007.

38. M. Wichtlhuber, R. Reinecke, and D. Hausheer. An SDN-based CDN/ISP collaboration architecture for managing high-volume flows. IEEE Transactions on Network and Service Management, 12(1): 48–60, March 2015.

39. A. Lara, and B. Ramamurthy. OpenSec: Policy-based security using Software-Defined Networking. IEEE Transactions on Network and Service Management, 13(1): 30–42, March 2016.

40. W. Jingjin, Z. Yujing, M. Zukerman, and E. K. N. Yung. Energy-efficient base stations sleep-mode techniques in green cellular networks: A survey. IEEE Communications Surveys Tutorials, 17(2): 803–826, 2015.

41. G. Auer, V. Giannini, C. Desset, I. Godor, P. Skillermark, M. Olsson, M. A. Imran, D. Sabella, M. J. Gonzalez, O. Blume, and A. Fehske. How much energy is needed to run a wireless network? IEEE Wireless Communications, 18(5): 40–49, 2011.

42. W. Yun, J. Staudinger, and M. Miller. High efficiency linear GaAs MMIC amplifier for wireless base station and Femto cell applications. In IEEE Topical Conference on Power Amplifiers for Wireless and Radio Applications, pages 49–52, January 2012.

43. M. A. Marsan, L. Chiaraviglio, D. Ciullo, and M. Meo. Optimal energy savings in cellular access networks. In IEEE International Conference on Communications Workshops, pages 1–5, June 2009.

44. H. Claussen, I. Ashraf, and L. T. W. Ho. Dynamic idle mode procedures for femtocells. Bell Labs Technical Journal, 15(2): 95–116, 2010.

45. L. Rongpeng, Z. Zhifeng, C. Xianfu, J. Palicot, and Z. Honggang. TACT: A transfer actor-critic learning framework for energy saving in cellular radio access networks. IEEE Transactions on Wireless Communications, 13(4): 2000–2011, 2014.

46. G. C. Januario, C. H. A. Costa, M. C. Amarai, A. C. Riekstin, T. C. M. B. Carvalho, and C. Meirosu. Evaluation of a policy-based network management system for energy-efficiency. In IFIP/IEEE International Symposium on Integrated Network Management, pages 596–602, May 2013.

47. C. Dsouza, G. J. Ahn, and M. Taguinod. Policy-driven security management for fog computing: Preliminary framework and a case study. In Conference on Information Reuse and Integration, pages 16–23, August 2014.

48. H. Kim and N. Feamster. Improving network management with Software Defined Networking. IEEE Communications Magazine, 51(2): 114–119, February 2013.

49. O. Gaddour, A. Koubaa, and M. Abid. Quality-of-service aware routing for static and mobile IPv6-based low-power and loss sensor networks using RPL. Ad Hoc Networks, 33: 233–256, 2015.

50. Q. Zhao, D. Grace, and T. Clarke. Transfer learning and cooperation management: Balancing the quality of service and information exchange overhead in cognitive radio networks. Transactions on Emerging Telecommunications Technologies, 26(2): 290–301, 2015.

51. M. Charalambides, P. Flegkas, G. Pavlou, A. K. Bandara, E. C. Lupu, A. Russo, N. Dulav, M. Sloman, and J. Rubio-Loyola. Policy conflict analysis for quality of service management. In Proceedings of the 6th IEEE International Workshop on Policies for Distributed Systems and Networks, pages 99–108, June 2005.

52. M. F. Bari, S. R. Chowdhury, R. Ahmed, and R. Boutaba. PolicyCop: An autonomic QoS policy enforcement framework for software defined networks. In 2013 IEEE SDN for Future Networks and Services, pages 1–7, November 2013.

53. C. Bennewith and R. Wickers. The mobile paradigm for content development, In Multimedia and E-Content Trends, pages 101–109. Vieweg+Teubner Verlag, 2009.

54. I. A. Junglas, and R. T. Watson. Location-based services. Communications ACM, 51(3): 65–69, March 2008.

55. M. Weiser. The computer for the 21st century. Scientific American, 265(3): 94–104, 1991.

56. G. D. Abowd, A. K. Dey, P. J. Brown, N. Davies, M. Smith, and P. Steggles. Towards a better understanding of context and context-awareness. In Handheld and Ubiquitous Computing, pages 304–307, September 1999.

57. B. Schilit, N. Adams, and R. Want. Context-aware computing applications. In Proceeding of the 1st Workshop Mobile Computing Systems and Applications, pages 85–90, December 1994.

58. N. Ryan, J. Pascoe, and D. Morse. Enhanced reality fieldwork: The context aware archaeological assistant. In Proceedings of the 25th Anniversary Computer Applications in Archaeology, pages 85–90, December 1997.

59. A. K. Dey. Context-aware computing: The CyberDesk project. In Proceedings of the AAAI 1998 Spring Symposium on Intelligent Environments, pages 51–54, 1998.

60. P. Prekop and M. Burnett. Activities, context and ubiquitous computing. Computer Communications, 26(11): 1168–1176, July 2003.

61. R. M. Gustavsen. Condor-an application framework for mobility-based context-aware applications. In Proceedings of the Workshop on Concepts and Models for Ubiquitous Computing, volume 39, September 2002.

62. C. Tadj and G. Ngantchaha. Context handling in a pervasive computing system framework. In Proceedings of the 3rd International Conference on Mobile Technology, Applications and Systems, pages 1–6, October 2006.

63. S. Dhar and U. Varshney. Challenges and business models for mobile location-based services and advertising. Communications ACM, 54(5): 121–128, May 2011.

64. F. Ricci, L. Rokach, and B. Shapira. Recommender Systems: Introduction and Challenges, In Recommender Systems Handbook, pages 1–34. Springer, Boston, MA, 2015.

65. J. B. Schafer, D. Frankowski, J. Herlocker, and S. Sen. Collaborative Filtering Recommender Systems, In The Adaptive Web, pages 291–324. Springer, Berlin, Heidelberg, 2007.

66. P. Lops, M. de Gemmis, and G. Semeraro. Content-Based Recommender Systems: State of the Art and Trends, In Recommender Systems Handbook, pages 73–105. Springer, Boston, MA, 2011.

67. D. Slamanig and C. Stingl. Privacy aspects of eHealth. In Proceedings of Conference on Availability, Reliability and Security, pages 1226–1233, March 2008.

68. C. Wang. Policy-based network management. In Proceedings of the International Conference on Communication Technology, volume 1, pages 101–105, 2000.

69. R. Want, A. Hopper, V. Falcao, and J. Gibbons. The active badge location system. ACM Transactions on Information Systems, 10(1): 91–102, January 1992.

70. K. R. Wood, T. Richardson, F. Bennett, A. Harter, and A. Hopper. Global teleporting with Java: Toward ubiquitous personalized computing. Computer, 30(2): 53–59, February 1997.

71. C. Perera, A. Zaslavsky, P. Christen, and D. Georgakopoulos. Context aware computing for the Internet of Things: A survey. IEEE Communications Surveys Tutorials, 16(1): 414–454, 2014.

72. B. Guo, L. Sun, and D. Zhang. The architecture design of a cross-domain context management system. In Proceedings of Conference Pervasive Computing and Communications Workshops, pages 499–504, April 2010.

73. A. Badii, M. Crouch, and C. Lallah. A context-awareness framework for intelligent networked embedded systems. In Proceedings of Conference on Advances in Human-Oriented and Personalized Mechanisms, Technologies and Services, pages 105–110, August 2010.

74. S. Pietschmann, A. Mitschick, R. Winkler, and K. Meissner. CroCo: Ontology-based, crossapplication context management. In Proceedings of Workshop on Semantic Media Adaptation and Personalization, pages 88–93, December 2008.

75. T. Gu, X. H. Wang, H. K. Pung, and D. Q. Zhang. An ontology-based context model in intelligent environments. In Proceedings of Communication Networks and Distributed Systems Modeling and Simulation Conference, pages 270–275, January 2004.

76. H. Chen, T. Finin, and A. Joshi. An ontology for context-aware pervasive computing environments. The Knowledge Engineering Review, 18(03): 197–207, September 2003.

77. D. Ejigu, M. Scuturici, and L. Brunie. CoCA: A collaborative context-aware service platform for pervasive computing. In Proceedings of Conference Information Technologies, pages 297–302, April 2007.

78. R. Yus, E. Mena, S. Ilarri, and A. Illarramendi. SHERLOCK: Semantic management of location based services in wireless environments. Pervasive and Mobile Computing, 15: 87–99, 2014.

79. L. Tang, Z. Yu, H. Wang, X. Zhou, and Z. Duan. Methodology and tools for pervasive application development. International Journal of Distributed Sensor Networks, 10(4): 1–16, 2014.

80. B. Bertran, J. Bruneau, D. Cassou, N. Loriant, E. Balland, and C. Consel. DiaSuite: A tool suite to develop sense/compute/control applications. Science of Computer Programming, 79: 39–51, 2014.

81. P. Jagtap, A. Joshi, T. Finin, and L. Zavala. Preserving privacy in context-aware systems. In Proceedings of Conference on Semantic Computing, pages 149–153, September 2011.

82. V. Sacramento, M. Endler, and F. N. Nascimento. A privacy service for context-aware mobile computing. In Proceedings of Conference on Security and Privacy for Emergency Areas in Communication Networks, pages 182–193, September 2005.

83. A. Huertas Celdrán, F. J. García Clemente, M. Gil Pérez, and G. Martínez Pérez. SeCoMan: A semantic-aware policy framework for developing privacy-preserving and context-aware smart applications. IEEE Systems Journal, 10(3): 1111–1124, September 2016.

84. J. Qu, G. Zhang, and Z. Fang. Prophet: A context-aware location privacy-preserving scheme in location sharing service. Discrete Dynamics in Nature and Society, 2017, 1–11, Article ID 6814832, 2017.

85. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. PRECISE: Privacy-aware recommender based on context information for Cloud service environments. IEEE Communications Magazine, 52(8): 90–96, August 2014.

86. S. Chitkara, N. Gothoskar, S. Harish, J.I. Hong, and Y. Agarwal. Does this app really need my location? Context-aware privacy management for smartphones. In Proceedings of the ACM Interactive Mobile, Wearable and Ubiquitous Technologies, 1(3): 42:1–42:22, September 2017.

87. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. What private information are you disclosing? A privacy-preserving system supervised by yourself. In Proceedings of the 6th International Symposium on Cyberspace Safety and Security, pages 1221–1228, August 2014.

88. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. MASTERY: A multicontext-aware system that preserves the users’ privacy. In IEEE/IFIP Network Operations and Management Symposium, pages 523–528, April 2016.

89. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. Preserving patients’ privacy in health scenarios through a multicontext-aware system. Annals of Telecommunications, 72(9–10): 577–587, October 2017.

90. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. Policy-based management for green mobile networks through software-defined networking. Mobile Networks and Applications, In Press, 2016.

91. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. Enabling highly dynamic mobile scenarios with software defined networking. IEEE Communications Magazine, Feature Topics Issue on SDN Use Cases for Service Provider Networks, 55(4): 108–113, April 2017. 





