
Cyber Security - Malware's Malicious Activities.

 


Malware effectively codifies the harmful behaviors that an attacker intends to carry out. 



The Cyber Kill Chain Model may be used to analyze cyberattacks, as illustrated in the table. 

It represents (iterations of) stages that are generally involved in a cyberattack. 

Reconnaissance is the initial phase, in which an attacker locates or attracts possible targets. 

This may be done by scanning the Internet for susceptible machines (computers that run network services, such as sendmail, with known vulnerabilities) or by sending phishing emails to a group of users. 

The next step is to gain access to the targets, for example by supplying crafted input that triggers a vulnerability, such as a buffer overflow, in the susceptible network service software, or by embedding malware in a web page to compromise a user's browser and take control of the machine. 



This relates to the Cyber Kill Chain Model's Weaponization and Delivery (of exploits) stages. 


Once the victim has been compromised, another piece of malware is often downloaded and installed; this corresponds to the Cyber Kill Chain Model's Installation (of malware) stage. 


This malware is the attacker's main workhorse and can perform a variety of tasks, including: 


confidentiality – it can steal valuable data, such as user authentication information and financial and health information; 

integrity – it can inject false information (e.g., send spam and phishing emails, generate fraudulent clicks, etc.) or modify data; and 

availability – it can send traffic as part of a distributed denial-of-service (DDoS) attack. 


Most modern malware performs a combination of these attack actions, because toolkits (e.g., key-loggers) are freely available for carrying out many 'standard' activities (e.g., recording user passwords), and because malware can be dynamically updated to include or activate new activities and take part in a longer or larger 'campaign' rather than just performing isolated, one-off actions. 


In the Cyber Kill Chain Model, these are the Actions on Objectives. 



Botnets exemplify malware that operates over a long period of time in a well-coordinated manner. 


A botnet is an attacker-controlled network of bots (i.e., compromised computers). 


Each bot is infected with botnet malware, which connects with the botnet command-and-control (C&C) server on a regular basis to receive instructions on particular destructive operations or malware upgrades. 

For example, a spamming botnet's C&C server sends each bot a spam template and a list of email addresses every day, resulting in the botnet sending a significant number of spam messages. 

If the botnet is disrupted as a result of detection and response activities, such as the current C&C server being taken down, the botnet malware is already designed to contact an alternate server and may receive updates to switch to a peer-to-peer botnet. 

Because there are numerous bots in various networks, botnets are often fairly noisy, i.e., relatively easy to detect. 


Botnet C&C is an example of the Cyber Kill Chain Model's Command & Control stage. 
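
To make the stage-by-stage analysis concrete, the sketch below (Python) shows how an analyst might tag observed events with kill chain stages. The seven-stage enumeration follows the commonly published form of the model; the example activity strings and their mapping are our own illustration, not part of the model itself.

```python
from enum import Enum

class KillChainStage(Enum):
    """Stages of the Cyber Kill Chain Model referenced above."""
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7

# Illustrative mapping of the activities discussed above to stages.
OBSERVED_ACTIVITY = {
    "scan for vulnerable sendmail servers": KillChainStage.RECONNAISSANCE,
    "phishing email sent to users": KillChainStage.DELIVERY,
    "buffer overflow triggered in network service": KillChainStage.EXPLOITATION,
    "second-stage malware dropped on victim": KillChainStage.INSTALLATION,
    "bot polls C&C server for instructions": KillChainStage.COMMAND_AND_CONTROL,
    "spam sent as part of a campaign": KillChainStage.ACTIONS_ON_OBJECTIVES,
}

if __name__ == "__main__":
    for activity, stage in OBSERVED_ACTIVITY.items():
        print(f"{stage.name:<22} {activity}")
```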


Unlike botnets, malware used by so-called advanced persistent threats (APTs) usually targets a single organization rather than attempting large-scale assaults. 

It may, for example, seek out a certain kind of controller in the organization to infect, causing it to deliver incorrect control signals that result in machine failures. 

APT malware is usually designed to operate for a long time (hence the label "persistent"). 

This means it not only receives frequent updates, but also avoids discovery by reducing the volume and intensity of its activity (i.e., 'low and slow'), moving across the organization (i.e., 'lateral movement'), and hiding its traces. 

Instead of sending all of the stolen data to a 'drop site' at once, it can send a small piece at a time and only when the server is already sending legitimate traffic; once it has finished stealing from one server, it moves to another (e.g., by exploiting trust relationships between the two) and removes logs and even patches the vulnerabilities in the first server. 

When analyzing a cyberattack using the Cyber Kill Chain Model, we must look at each step's activities. 

This necessitates familiarity with the attack techniques involved. 

The ATT&CK Knowledge Base is a valuable resource for analysts since it documents up-to-date attack tactics, techniques, and procedures based on real-world observations. 
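
As an illustration of using ATT&CK data programmatically, the sketch below downloads the enterprise ATT&CK STIX bundle and prints a few technique IDs and names. The URL is an assumption based on MITRE's public cti GitHub repository; verify the current location and version before relying on it.

```python
import json
import urllib.request

# Enterprise ATT&CK STIX bundle in MITRE's public "cti" repository
# (assumed location; check the repository for the current path/version).
ATTACK_URL = ("https://raw.githubusercontent.com/mitre/cti/"
              "master/enterprise-attack/enterprise-attack.json")

def list_techniques(limit=10):
    """Download the ATT&CK bundle and print a few technique IDs and names."""
    with urllib.request.urlopen(ATTACK_URL) as resp:
        bundle = json.load(resp)
    techniques = [obj for obj in bundle["objects"]
                  if obj.get("type") == "attack-pattern"]
    for t in techniques[:limit]:
        ext_ids = [ref.get("external_id")
                   for ref in t.get("external_references", [])
                   if ref.get("source_name") == "mitre-attack"]
        print(ext_ids[0] if ext_ids else "?", "-", t.get("name"))

if __name__ == "__main__":
    list_techniques()
```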


The Underground Eco-System.


In the early days, malware attacks were mostly nuisance attacks (such as defacing or spraying graffiti on a company's website). 


In recent years, malware attacks have evolved into full-fledged cyberwarfare (e.g., attacks on critical infrastructure) and sophisticated crime (e.g., ransomware, fake anti-virus tools, etc.). 

An underground eco-system has also evolved to support the whole malware lifecycle: creation, deployment, operations, and monetization. 

Individuals in this eco-system specialize in particular aspects of the malware lifecycle and, by offering their services to others, share in the (monetary) benefits and rewards. 


The quality of malware increases as a result of this specialization. 


For example, an attacker may hire a top exploit researcher to create the part of the malware that remotely compromises a vulnerable machine. 


Specialization may also help to provide plausible deniability or, at the very least, reduce culpability. 


For example, a spammer merely 'rents' a botnet to transmit spam and is not responsible for infecting machines and converting them into bots; similarly, an exploit 'researcher' is simply experimenting and is not responsible for building the botnet as long as the malware was not released by him. 

That is, although they are all responsible for malware-related harm, they individually share just a fraction of the overall burden.





~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read and learn more Technology and Engineering here.

You may also want to read and learn more Cyber Security Systems here.







Cyber Security - Malware Analysis.

 




Malware is short for 'malicious software' and refers to any program that engages in malicious behavior. The terms malware and malicious code are used interchangeably. 


Malware comes in a variety of shapes and sizes, as well as several categories, such as viruses, Trojans, worms, spyware, botnet malware, ransomware, and so on. 

Many cyberattacks on the Internet are carried out through malware, including nation-state cyberwar, criminality, fraud, and scams. 

Trojans, for example, may create a backdoor into a government network, allowing nation-state attackers to steal confidential data. Ransomware may encrypt data on a user's computer, rendering it inaccessible to the user, and only decrypt the data after the user pays a ransom. 


Many Distributed Denial-of-Service (DDoS) assaults, as well as spam and phishing, are carried out via botnet malware. 

To better comprehend cyberattacks and build effective remedies, we need to investigate the mechanisms underlying malware generation and dissemination. 

As the political and financial stakes have risen, both malware technologies and operating models, and the corresponding cyber-defense measures, have grown in complexity and resilience. 

For example, to evade malware detection systems, attackers now use obfuscation techniques such as packing, polymorphism, and metamorphism, and they set up adaptive network infrastructures on the Internet to support malware updates, command-and-control, and other logistics such as data transfers. 
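
As one concrete example on the defensive side: packed or encrypted malware sections tend to have unusually high byte entropy, so a common triage heuristic (not mentioned in the text above, added here purely as an illustration) is to flag files whose entropy approaches the 8-bits-per-byte maximum. A minimal sketch, with an arbitrary threshold:

```python
import math
import sys
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_packed(path: str, threshold: float = 7.2) -> bool:
    """Crude triage heuristic: very high entropy often indicates packing
    or encryption. The threshold is illustrative, not a hard rule."""
    with open(path, "rb") as f:
        data = f.read()
    return shannon_entropy(data) > threshold

if __name__ == "__main__":
    for sample in sys.argv[1:]:
        verdict = "possibly packed" if looks_packed(sample) else "likely not packed"
        print(sample, verdict)
```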


In summary, malware research is growing more vital, but also more difficult. 


We'll go through a malware taxonomy, as well as malware's typical malicious activities, eco-system, and support infrastructure. 

After that, we'll go over the tools and techniques for analyzing malware behavior, network- and host-based methods for detecting malware activity, and the forensic analysis and attribution processes and techniques used to respond to malware attacks. 




A MALWARE TAXONOMY.



Malware comes in a variety of forms. It is instructive to create a taxonomy that systematically categorizes the wide range of malware types. 


This taxonomy defines the common properties of each form of malware and may therefore be used to drive the creation of countermeasures for an entire malware category (rather than a specific malware). 

Our taxonomy may encompass several dimensions due to the different features of malware technology and attack activities that can be used to categorize and label malware. 

We'll go through a handful of the more crucial ones here. It's worth noting that other, more specialized features, such as target CPU architecture or operating system, might also be employed. 



The first axis of our taxonomy is whether malware is a stand-alone (or independent) program or only a set of instructions to be implanted in another program. 


Once installed and launched on a compromised system, standalone malware is a full application that can function on its own. 

Worms and botnet malware, for example, are examples of this sort of malware. 

The second kind requires the execution of a host program, i.e., it must infect a program on a computer by injecting its instructions into the program, causing the malware instructions to be executed as well. 

Document macro viruses and malicious browser plug-ins are examples of this type. 

In general, standalone malware is simpler to detect since it is a separate application or running process that may be spotted by the operating system or security solutions. 
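
A minimal sketch of this idea, using the third-party psutil package and a heuristic of our own choosing (executables running from temporary directories); it is an illustration of process-level visibility, not a reliable detector:

```python
# Requires the third-party "psutil" package (pip install psutil).
import psutil

# Illustrative heuristic only: flag processes whose executable lives
# in a temporary directory.
SUSPICIOUS_DIRS = ("/tmp", "/var/tmp", "\\AppData\\Local\\Temp")

def suspicious_processes():
    findings = []
    for proc in psutil.process_iter(["pid", "name", "exe"]):
        exe = proc.info.get("exe") or ""
        if any(d.lower() in exe.lower() for d in SUSPICIOUS_DIRS):
            findings.append((proc.info["pid"], proc.info["name"], exe))
    return findings

if __name__ == "__main__":
    for pid, name, exe in suspicious_processes():
        print(f"PID {pid}: {name} running from {exe}")
```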



The second dimension to consider is whether malware is persistent or not. 


The majority of malware is installed in persistent storage (usually a file system), either as standalone malware or as an infection of another program that already has a presence in persistent storage. 

Other malware is memory-resident: once the computer is restarted or the infected running program terminates, it is gone from the system. 

Many anti-virus programs that depend on file scanning are unable to identify memory-resident malware. 

Such ephemeral malware also has the benefit of being easy to clean up (or cover up) once it has completed its attack. 

The traditional method for malware to become memory-resident is to delete the malicious program (which had previously been downloaded and installed) from the file system as soon as it has been run. 

Newer methods use system administration and security technologies such as PowerShell to inject malware directly into memory. 

According to one report, Meterpreter malware was downloaded and injected into memory using PowerShell commands following an initial exploit that resulted in the unauthorized execution of PowerShell, and it harvested passwords on the compromised machine. 
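
One way defenders look for the "run, then delete the file" pattern described above is to check for processes whose on-disk executable has been unlinked; on Linux, /proc/<pid>/exe then points to a path ending in " (deleted)". A minimal Linux-only sketch of that check (a generic forensic heuristic, not the method used in the cited report):

```python
import os

def deleted_executable_processes():
    """Return (pid, target) pairs for processes whose executable was deleted."""
    findings = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            target = os.readlink(f"/proc/{entry}/exe")
        except OSError:
            continue  # kernel threads, exited, or inaccessible processes
        if target.endswith(" (deleted)"):
            findings.append((int(entry), target))
    return findings

if __name__ == "__main__":
    for pid, target in deleted_executable_processes():
        print(f"PID {pid}: executable unlinked from disk -> {target}")
```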



The third dimension, which applies only to persistent malware, classifies malware according to the layer of the system stack at which it is installed and runs. 


In increasing order, the layers are firmware, boot sector, operating system kernel, drivers and Application Programming Interfaces (APIs), and user applications. 

Lower-layer malware is often more difficult to detect and remove, and it causes more damage since it has more power over the affected machine. 

On the other hand, since there are more limits, such as a more constrained programming environment in terms of both the kinds and quantity of code permitted, it is also more difficult to build malware that can be implanted at a lower layer. 



The fourth dimension is whether malware is initiated by a user activity or runs and spreads automatically. 


When auto-spreading malware starts, it searches the Internet for additional susceptible devices, compromises them, and installs itself on them; the malware copies on these newly infected machines then do the same – run and propagate. 

Auto-spreading malware can obviously propagate swiftly across the Internet, exponentially increasing the number of infected systems. 

User-activated malware, on the other hand, is installed on a computer when a user unintentionally downloads and runs it, such as by clicking on an attachment or URL in an email. 

More crucially, while such malware can 'spread' by emailing itself as an attachment to contacts in the user's address book, this spreading does not take effect until a recipient of the email activates the malware. 
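
To see why auto-spreading malware grows so much faster than user-activated malware, consider a toy growth model in which each infected machine compromises, on average, r new machines per time step; the parameters below are arbitrary and for illustration only:

```python
# Toy model: before saturation, the infected population grows roughly as
# I(t) = I0 * (1 + r) ** t, where r is the average number of new machines
# each infected machine compromises per time step. Illustrative values only.
def infected_after(steps: int, initial: int = 1, rate: float = 2.0) -> int:
    return int(initial * (1 + rate) ** steps)

if __name__ == "__main__":
    for t in range(0, 11, 2):
        print(f"step {t:2d}: ~{infected_after(t):,} infected hosts")
```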



The fifth dimension is whether malware is static (or updated only once) or continuously updated. 


Most current malware is backed by infrastructure that allows a hacked computer to get a software update from a malware server, which results in the installation of a new version of the malware on the infected machine. 

There are several advantages to upgrading malware from the attacker's perspective. 

Updated malware, for example, may elude detection systems based on the features of previous malware samples. 



The sixth dimension is whether malware acts alone or as part of a coordinated network (i.e., a botnet). 


While botnets are responsible for many attacks such as DDoS, spam, and phishing, standalone malware has become more common in targeted attacks. 

That is, malware may be tailored to infect a target company and carry out destructive operations based on the company's assets that are valuable to the attacker. 
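
Having walked through all six dimensions, a minimal sketch of how an analyst might record them for a sample is shown below; the field names and the example values are our own labels, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class MalwareProfile:
    standalone: bool           # dimension 1: standalone vs. host-program infector
    persistent: bool           # dimension 2: persistent vs. memory-resident
    layer: str                 # dimension 3: firmware/boot/kernel/driver/application
    auto_spreading: bool       # dimension 4: auto-spreading vs. user-activated
    dynamically_updated: bool  # dimension 5: static vs. updated via infrastructure
    coordinated: bool          # dimension 6: part of a botnet vs. acting alone

# Example: a typical spamming bot, profiled along the six dimensions.
spam_bot = MalwareProfile(
    standalone=True,
    persistent=True,
    layer="application",
    auto_spreading=True,
    dynamically_updated=True,
    coordinated=True,
)

if __name__ == "__main__":
    print(spam_bot)
```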

To evade detection, most modern malware employs some type of obfuscation (and hence we do not explicitly include obfuscation in this taxonomy). 

A malware author may employ a variety of obfuscation methods and tools that are publicly available on the Internet. 

Polymorphism, for example, may be exploited to evade detection approaches that rely on malware code 'signatures' or patterns. 

That is, the malware's recognizable traits are altered to make each instance unique. 

As a result, malware instances vary in appearance, yet they all perform the same malicious functions. 

Two popular polymorphic malware techniques are packing, which involves compressing and encrypting part of the malware, and rewriting recognizable malicious instructions into other, equivalent instructions. 
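
The sketch below shows the kind of exact signature matching that such obfuscation defeats: hashing a file and checking it against a set of known-bad hashes. Because a polymorphic sample's bytes differ in every copy, its hash will never appear in such a list (the hash entries below are placeholders, not real signatures):

```python
import hashlib
import sys

# Placeholder entries; a real blocklist would hold hashes of observed samples.
KNOWN_BAD_SHA256 = {
    "0" * 64,
}

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_bad(path: str) -> bool:
    return sha256_of(path) in KNOWN_BAD_SHA256

if __name__ == "__main__":
    for sample in sys.argv[1:]:
        print(sample, "match" if is_known_bad(sample) else "no match")
```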



As an example, we may apply this taxonomy to a variety of malware kinds (or names). Take a look at the table below: 





A virus, in particular, requires a host program to run, since it infects the program by injecting a malicious code sequence into it. 


When the host program starts, the malicious code is executed, and in addition to performing its intended malicious actions it may look for other programs to infect. 

A virus is usually persistent, residing in all layers of the system stack except the hardware. 

It can propagate on its own because it can autonomously insert itself into applications. 


If a virus can connect to a malware update server, it may also be dynamically updated. 


A polymorphic virus can mutate itself so that new copies look different, although the mutation technique is built into the virus's own code. 

A virus isn't usually part of a coordinated network since, although it may infect a large number of computers, the viral code doesn't usually undertake coordinated actions. 


Malicious browser plug-ins and extensions, scripts (e.g., JavaScript on a web page), and document macros are examples of malware that need a host application (e.g., macro viruses and PDF malware). 


These sorts of malware may be disguised and constantly updated. 

They can also create a coordinated network. 

Any malware that is part of a coordinated network with a botnet architecture that offers command-and-control is referred to as botnet malware. 

Malware updates and other logistical assistance are frequently provided via botnet infrastructure. 


Botnet malware is persistent and frequently obfuscated, living in the kernel, driver, or application layers. 


Some botnet malware, such as malicious browser plug-ins and extensions (e.g., malicious JavaScript), needs a host application and user activation to propagate. 

Other botnet malware is standalone and spreads automatically across the Internet by infecting vulnerable machines or users. 

Trojans, keyloggers, ransomware, click bots, spam bots, mobile malware, and other malware are examples. 



PUPs (Potentially Unwanted Programs) 


A potentially unwanted program (PUP) is typically a piece of code that is downloaded along with a useful piece of software. 


When a user downloads the free edition of a mobile gaming app, for example, adware, a kind of PUP that shows ad banners on the game window, may be installed. 

Adware often gathers user data (such as geo-location, time spent playing the game, friends, and so on) without the user's knowledge or agreement in order to provide more targeted adverts to the user and increase the advertising's efficacy. 

In this scenario, the adware is classified as spyware, which is defined as unwanted software that collects information about a computer and its users. 


PUPs exist in a gray area because, although most download agreements disclose these dubious behaviors, most users do not read the fine print and therefore do not understand what they are installing. 


PUPs should be classified as malware from the standpoint of cybersecurity, and this is the approach used by many protection solutions. 

The basic reason is that a PUP has the ability to evolve into full-fledged malware; once it is installed, the user is completely at the mercy of the PUP operator. 

For example, spyware included in a spellchecker browser extension might collect information about the user's preferred websites. 

It may, however, collect user account information such as logins and passwords. 

In this situation, the spyware has evolved from a PUP into malware.



~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read and learn more Technology and Engineering here.

You may also want to read and learn more Cyber Security Systems here.










Cyber Security - Location and Context-Awareness.


 


Context-aware IMS solutions emerged with the growth in the number of mobile devices and mobile paradigms, which made it necessary for IMSs to take the location of users into account [53]. 


1. Location-Based Services.


Location-Based Services (LBSs) are systems that deliver information based on the location of people or devices [54]. 

Some LBSs go a step further in delivering valuable services by using the users' location to infer additional information about the area. 

To this end, existing IMSs track users' locations so that they can be taken into account during management operations. 

An LBS may be either person-oriented or device-oriented, depending on the emphasis of services. 


2. Scenarios for Context-Aware Applications. 

This section describes several scenarios in which the PBM paradigm helps IMSs process and protect information, as well as manage the configuration and behavior of systems. 


3. Proposals for Context-Awareness

Many context-aware services have been proposed in recent years in an attempt to make life simpler. Although the term "context" was coined in 1994, the first context-aware solution in the literature was proposed in 1991 [69]. 






~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read and learn more Technology and Engineering here.

You may also want to read and learn more Cyber Security Systems here.





References & Further Reading:



1. OSI. Information Processing Systems-Open System Interconnection-Systems Management Overview. ISO 10040, 1991.

2. Jefatura del Estado. Ley Orgánica de Protección de Datos de Carácter Personal. www.boe.es/boe/dias/1999/12/14/pdfs/A43088-43099.pdf.

3. D. W. Samuel, and D. B. Louis. The right to privacy. Harvard Law Review, 4(5): 193–220, 1890.

4. A. Westerinen, J. Schnizlein, J. Strassner, M. Scherling, B. Quinn, S. Herzog, A. Huynh, M. Carlson, J. Perry, and S. Waldbusser. Terminology for Policy-Based Management. IETF Request for Comments 3198, November 2001.

5. B. Moore. Policy Core Information Model (PCIM) Extensions. IETF Request for Comments 3460, January 2003.

6. S. Godik, and T. Moses. OASIS EXtensible Access Control Markup Language (XACML). OASIS Committee Specification, 2002.

7. A. Dardenne, A. Van Lamsweerde and S. Fickas. Goal-directed requirements acquisition. Science of Computer Programming, 20(1–2): 3–50, 1993.

8. F. L. Gandon, and N. M. Sadeh. Semantic web technologies to reconcile privacy and context awareness. Web Semantics: Science, Services and Agents on the World Wide Web, 1(3): 241–260, April 2004.

9. I. Horrocks. Ontologies and the semantic web. Communications ACM, 51(12): 58–67, December 2008.

10. R. Boutaba and I. Aib. Policy-based management: A historical perspective. Journal of Network and Systems Management, 15(4): 447–480, 2007.

11. P. A. Carter. Policy-Based Management, In Pro SQL Server Administration, pages 859–886. Apress, Berkeley, CA, 2015.

12. D. Florencio, and C. Herley. Where do security policies come from? In Proceedings of the 6th Symposium on Usable Privacy and Security, pages 10:1–10:14, 2010.

13. K. Yang, and X. Jia. DAC-MACS: Effective data access control for multi-authority Cloud storage systems, IEEE Transactions on Information Forensics and Security, 8(11): 1790–1801, 2014.

14. B. W. Lampson. Dynamic protection structures. In Proceedings of the Fall Joint Computer Conference, pages 27–38, 1969.

15. B. W. Lampson. Protection. ACM SIGOPS Operating Systems Review, 8(1): 18–24, January 1974.

16. D. E. Bell and L. J. LaPadula. Secure Computer Systems: Mathematical Foundations. Technical report, DTIC Document, 1973.

17. D. F. Ferraiolo, and D. R. Kuhn. Role-based access controls. In Proceedings of the 15th NIST-NCSC National Computer Security Conference, pages 554–563, 1992.

18. V. P. Astakhov. Surface integrity: Definition and importance in functional performance, In Surface Integrity in Machining, pages 1–35. Springer, London, 2010.

19. K. J. Biba. Integrity Considerations for Secure Computer Systems. Technical report, DTIC Document, 1977.

20. M. J. Culnan, and P. K. Armstrong. Information privacy concerns, procedural fairness, and impersonal trust: An empirical investigation. Organization Science, 10(1): 104–115, 1999.

21. A. I. Antón, E. Bertino, N. Li, and T. Yu. A roadmap for comprehensive online privacy policy management. Communications ACM, 50(7): 109–116, July 2007.

22. J. Karat, C. M. Karat, C. Brodie, and J. Feng. Privacy in information technology: Designing to enable privacy policy management in organizations. International Journal of Human Computer Studies, 63(1–2): 153–174, 2005.

23. M. Jafari, R. Safavi-Naini, P. W. L. Fong, and K. Barker. A framework for expressing and enforcing purpose-based privacy policies. ACM Transaction Information Systesms Security, 17(1): 3:1–3:31, August 2014.

24. G. Karjoth, M. Schunter, and M. Waidner. Platform for enterprise privacy practices: Privacy-enabled management of customer data, In Proceedings of the International Workshop on Privacy Enhancing Technologies, pages 69–84, 2003.

25. S. R. Blenner, M. Kollmer, A. J. Rouse, N. Daneshvar, C. Williams, and L. B. Andrews. Privacy policies of android diabetes apps and sharing of health information. JAMA, 315(10): 1051–1052, 2016.

26. R. Ramanath, F. Liu, N. Sadeh, and N. A. Smith. Unsupervised alignment of privacy policies using hidden Markov models. In Proceedings of the Annual Meeting of the Association of Computational Linguistics, pages 605–610, June 2014.

27. J. Gerlach, T. Widjaja, and P. Buxmann. Handle with care: How online social network providers’ privacy policies impact users’ information sharing behavior. The Journal of Strategic Information Systems, 24(1): 33–43, 2015.

28. O. Badve, B. B. Gupta, and S. Gupta. Reviewing the Security Features in Contemporary Security Policies and Models for Multiple Platforms. In Handbook of Research on Modern Cryptographic Solutions for Computer and Cyber Security, pages 479–504. IGI Global, Hershey, PA, 2016.

29. K. Zkik, G. Orhanou, and S. El Hajji. Secure mobile multi cloud architecture for authentication and data storage. International Journal of Cloud Applications and Computing 7(2): 62–76, 2017.

30. C. Stergiou, K. E. Psannis, B. Kim, and B. Gupta. Secure integration of IoT and cloud computing. In Future Generation Computer Systems, 78(3): 964–975, 2018.

31. D. C. Verma. Simplifying network administration using policy-based management. IEEE Network, 16(2): 20–26, March 2002.

32. D. C. Verma. Policy-Based Networking: Architecture and Algorithms. New Riders Publishing, Thousand Oaks, CA, 2000.

33. J. Rubio-Loyola, J. Serrat, M. Charalambides, P. Flegkas, and G. Pavlou. A methodological approach toward the refinement problem in policy-based management systems. IEEE Communications Magazine, 44(10): 60–68, October 2006.

34. F. Perich. Policy-based network management for next generation spectrum access control. In Proceedings of International Symposium on New Frontiers in Dynamic Spectrum Access Networks, pages 496–506, April 2007.

35. S. Shin, P. A. Porras, V. Yegneswaran, M. W. Fong, G. Gu, and M. Tyson. FRESCO: Modular composable security services for Software-Defined Networks. In Proceedings of the 20th Annual Network and Distributed System Security Symposium, pages 1–16, 2013.

36. K. Odagiri, S. Shimizu, N. Ishii, and M. Takizawa. Functional experiment of virtual policy based network management scheme in Cloud environment. In International Conference on Network-Based Information Systems, pages 208–214, September 2014.

37. M. Casado, M. J. Freedman, J. Pettit, J. Luo, N. McKeown, and S. Shenker. Ethane: Taking control of the enterprise. In Proceedings of Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, pages 1–12, August 2007.

38. M. Wichtlhuber, R. Reinecke, and D. Hausheer. An SDN-based CDN/ISP collaboration architecture for managing high-volume flows. IEEE Transactions on Network and Service Management, 12(1): 48–60, March 2015.

39. A. Lara, and B. Ramamurthy. OpenSec: Policy-based security using Software-Defined Networking. IEEE Transactions on Network and Service Management, 13(1): 30–42, March 2016.

40. W. Jingjin, Z. Yujing, M. Zukerman, and E. K. N. Yung. Energy-efficient base stations sleep-mode techniques in green cellular networks: A survey. IEEE Communications Surveys Tutorials, 17(2): 803–826, 2015.

41. G. Auer, V. Giannini, C. Desset, I. Godor, P. Skillermark, M. Olsson, M. A. Imran, D. Sabella, M. J. Gonzalez, O. Blume, and A. Fehske. How much energy is needed to run a wireless network? IEEE Wireless Communications, 18(5): 40–49, 2011.

42. W. Yun, J. Staudinger, and M. Miller. High efficiency linear GaAs MMIC amplifier for wireless base station and Femto cell applications. In IEEE Topical Conference on Power Amplifiers for Wireless and Radio Applications, pages 49–52, January 2012.

43. M. A. Marsan, L. Chiaraviglio, D. Ciullo, and M. Meo. Optimal energy savings in cellular access networks. In IEEE International Conference on Communications Workshops, pages 1–5, June 2009.

44. H. Claussen, I. Ashraf, and L. T. W. Ho. Dynamic idle mode procedures for femtocells. Bell Labs Technical Journal, 15(2): 95–116, 2010.

45. L. Rongpeng, Z. Zhifeng, C. Xianfu, J. Palicot, and Z. Honggang. TACT: A transfer actor-critic learning framework for energy saving in cellular radio access networks. IEEE Transactions on Wireless Communications, 13(4): 2000–2011, 2014.

46. G. C. Januario, C. H. A. Costa, M. C. Amarai, A. C. Riekstin, T. C. M. B. Carvalho, and C. Meirosu. Evaluation of a policy-based network management system for energy-efficiency. In IFIP/IEEE International Symposium on Integrated Network Management, pages 596–602, May 2013.

47. C. Dsouza, G. J. Ahn, and M. Taguinod. Policy-driven security management for fog computing: Preliminary framework and a case study. In Conference on Information Reuse and Integration, pages 16–23, August 2014.

48. H. Kim and N. Feamster. Improving network management with Software Defined Networking. IEEE Communications Magazine, 51(2): 114–119, February 2013.

49. O. Gaddour, A. Koubaa, and M. Abid. Quality-of-service aware routing for static and mobile IPv6-based low-power and loss sensor networks using RPL. Ad Hoc Networks, 33: 233–256, 2015.

50. Q. Zhao, D. Grace, and T. Clarke. Transfer learning and cooperation management: Balancing the quality of service and information exchange overhead in cognitive radio networks. Transactions on Emerging Telecommunications Technologies, 26(2): 290–301, 2015.

51. M. Charalambides, P. Flegkas, G. Pavlou, A. K. Bandara, E. C. Lupu, A. Russo, N. Dulav, M. Sloman, and J. Rubio-Loyola. Policy conflict analysis for quality of service management. In Proceedings of the 6th IEEE International Workshop on Policies for Distributed Systems and Networks, pages 99–108, June 2005.

52. M. F. Bari, S. R. Chowdhury, R. Ahmed, and R. Boutaba. PolicyCop: An autonomic QoS policy enforcement framework for software defined networks. In 2013 IEEE SDN for Future Networks and Services, pages 1–7, November 2013.

53. C. Bennewith and R. Wickers. The mobile paradigm for content development, In Multimedia and E-Content Trends, pages 101–109. Vieweg+Teubner Verlag, 2009.

54. I. A. Junglas, and R. T. Watson. Location-based services. Communications ACM, 51(3): 65–69, March 2008.

55. M. Weiser. The computer for the 21st century. Scientific American, 265(3): 94–104, 1991.

56. G. D. Abowd, A. K. Dey, P. J. Brown, N. Davies, M. Smith, and P. Steggles. Towards a better understanding of context and context-awareness. In Handheld and Ubiquitous Computing, pages 304–307, September 1999.

57. B. Schilit, N. Adams, and R. Want. Context-aware computing applications. In Proceeding of the 1st Workshop Mobile Computing Systems and Applications, pages 85–90, December 1994.

58. N. Ryan, J. Pascoe, and D. Morse. Enhanced reality fieldwork: The context aware archaeological assistant. In Proceedings of the 25th Anniversary Computer Applications in Archaeology, pages 85–90, December 1997.

59. A. K. Dey. Context-aware computing: The CyberDesk project. In Proceedings of the AAAI 1998 Spring Symposium on Intelligent Environments, pages 51–54, 1998.

60. P. Prekop and M. Burnett. Activities, context and ubiquitous computing. Computer Communications, 26(11): 1168–1176, July 2003.

61. R. M. Gustavsen. Condor-an application framework for mobility-based context-aware applications. In Proceedings of the Workshop on Concepts and Models for Ubiquitous Computing, volume 39, September 2002.

62. C. Tadj and G. Ngantchaha. Context handling in a pervasive computing system framework. In Proceedings of the 3rd International Conference on Mobile Technology, Applications and Systems, pages 1–6, October 2006.

63. S. Dhar and U. Varshney. Challenges and business models for mobile location-based services and advertising. Communications ACM, 54(5): 121–128, May 2011.

64. F. Ricci, L. Rokach, and B. Shapira. Recommender Systems: Introduction and Challenges, In Recommender Systems Handbook, pages 1–34. Springer, Boston, MA, 2015.

65. J. B. Schafer, D. Frankowski, J. Herlocker, and S. Sen. Collaborative Filtering Recommender Systems, In The Adaptive Web, pages 291–324. Springer, Berlin, Heidelberg, 2007.

66. P. Lops, M. de Gemmis, and G. Semeraro. Content-Based Recommender Systems: State of the Art and Trends, In Recommender Systems Handbook, pages 73–105. Springer, Boston, MA, 2011.

67. D. Slamanig and C. Stingl. Privacy aspects of eHealth. In Proceedings of Conference on Availability, Reliability and Security, pages 1226–1233, March 2008.

68. C. Wang. Policy-based network management. In Proceedings of the International Conference on Communication Technology, volume 1, pages 101–105, 2000.

69. R. Want, A. Hopper, V. Falcao, and J. Gibbons. The active badge location system. ACM Transactions on Information Systems, 10(1): 91–102, January 1992.

70. K. R. Wood, T. Richardson, F. Bennett, A. Harter, and A. Hopper. Global teleporting with Java: Toward ubiquitous personalized computing. Computer, 30(2): 53–59, February 1997.

71. C. Perera, A. Zaslavsky, P. Christen, and D. Georgakopoulos. Context aware computing for the Internet of Things: A survey. IEEE Communications Surveys Tutorials, 16(1): 414–454, 2014.

72. B. Guo, L. Sun, and D. Zhang. The architecture design of a cross-domain context management system. In Proceedings of Conference Pervasive Computing and Communications Workshops, pages 499–504, April 2010.

73. A. Badii, M. Crouch, and C. Lallah. A context-awareness framework for intelligent networked embedded systems. In Proceedings of Conference on Advances in Human-Oriented and Personalized Mechanisms, Technologies and Services, pages 105–110, August 2010.

74. S. Pietschmann, A. Mitschick, R. Winkler, and K. Meissner. CroCo: Ontology-based, crossapplication context management. In Proceedings of Workshop on Semantic Media Adaptation and Personalization, pages 88–93, December 2008.

75. T. Gu, X. H. Wang, H. K. Pung, and D. Q. Zhang. An ontology-based context model in intelligent environments. In Proceedings of Communication Networks and Distributed Systems Modeling and Simulation Conference, pages 270–275, January 2004.

76. H. Chen, T. Finin, and A. Joshi. An ontology for context-aware pervasive computing environments. The Knowledge Engineering Review, 18(03): 197–207, September 2003.

77. D. Ejigu, M. Scuturici, and L. Brunie. CoCA: A collaborative context-aware service platform for pervasive computing. In Proceedings of Conference Information Technologies, pages 297–302, April 2007.

78. R. Yus, E. Mena, S. Ilarri, and A. Illarramendi. SHERLOCK: Semantic management of location based services in wireless environments. Pervasive and Mobile Computing, 15: 87–99, 2014.

79. L. Tang, Z. Yu, H. Wang, X. Zhou, and Z. Duan. Methodology and tools for pervasive application development. International Journal of Distributed Sensor Networks, 10(4): 1–16, 2014.

80. B. Bertran, J. Bruneau, D. Cassou, N. Loriant, E. Balland, and C. Consel. DiaSuite: A tool suite to develop sense/compute/control applications. Science of Computer Programming, 79: 39–51, 2014.

81. P. Jagtap, A. Joshi, T. Finin, and L. Zavala. Preserving privacy in context-aware systems. In Proceedings of Conference on Semantic Computing, pages 149–153, September 2011.

82. V. Sacramento, M. Endler, and F. N. Nascimento. A privacy service for context-aware mobile computing. In Proceedings of Conference on Security and Privacy for Emergency Areas in Communication Networks, pages 182–193, September 2005.

83. A. Huertas Celdrán, F. J. García Clemente, M. Gil Pérez, and G. Martínez Pérez. SeCoMan: A semantic-aware policy framework for developing privacy-preserving and context-aware smart applications. IEEE Systems Journal, 10(3): 1111–1124, September 2016.

84. J. Qu, G. Zhang, and Z. Fang. Prophet: A context-aware location privacy-preserving scheme in location sharing service. Discrete Dynamics in Nature and Society, 2017, 1–11, Article ID 6814832, 2017.

85. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. PRECISE: Privacy-aware recommender based on context information for Cloud service environments. IEEE Communications Magazine, 52(8): 90–96, August 2014.

86. S. Chitkara, N. Gothoskar, S. Harish, J.I. Hong, and Y. Agarwal. Does this app really need my location? Context-aware privacy management for smartphones. In Proceedings of the ACM Interactive Mobile, Wearable and Ubiquitous Technologies, 1(3): 42:1–42:22, September 2017.

87. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. What private information are you disclosing? A privacy-preserving system supervised by yourself. In Proceedings of the 6th International Symposium on Cyberspace Safety and Security, pages 1221–1228, August 2014.

88. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. MASTERY: A multicontext-aware system that preserves the users’ privacy. In IEEE/IFIP Network Operations and Management Symposium, pages 523–528, April 2016.

89. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. Preserving patients’ privacy in health scenarios through a multicontext-aware system. Annals of Telecommunications, 72(9–10): 577–587, October 2017.

90. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. Policy-based management for green mobile networks through software-defined networking. Mobile Networks and Applications, In Press, 2016.

91. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. Enabling highly dynamic mobile scenarios with software defined networking. IEEE Communications Magazine, Feature Topics Issue on SDN Use Cases for Service Provider Networks, 55(4): 108–113, April 2017. 





