SATISFIABILITY REASONING OVER VAGUE ONTOLOGIES USING FUZZY SOFT SET THEORY
ABSTRACT
An ontology is an explicit representation of a conceptualization, widely used in modelling real-world domains by defining their shared vocabularies such that they can be understood by both humans and machines for the purpose of information sharing. Description Logic (DL) is a knowledge representation language that is widely used in building ontologies and provides the foundation upon which modern web ontology languages such as OWL are built. Classical ontology definitions contain concepts and relations that describe asserted facts about the real world. Earlier studies on ontologies overlooked the representation of uncertainty in their formalizations. However, for a real-world domain to be fully modelled, its uncertain aspects must be reflected and appropriately represented. This work proposes a satisfiability reasoning algorithm based on fuzzy soft set theory in order to reason about the uncertain aspects of an ontology of a vague domain. The proposed algorithm was evaluated by applying it to some vague ontologies, and the results were compared with the tableaux-based and soft set ontology reasoning techniques. The results show that the proposed algorithm remains satisfiable when fuzzy concepts and assertions are involved in an ontology representation, whereas such fuzzy conceptions are handled by neither the tableaux-based nor the soft set ontology procedures.
Chapter One
Introduction
1.1 Background to the Study
The use of information from heterogeneous sources is an intelligent task that requires a human being with background knowledge of the information. However, due to the existence of several information sources, human processing speed cannot be relied upon for speedy information processing. In contrast, computers can easily deal with such voluminous information as long as its processing does not require human intelligence. Therefore, for information to be processed efficiently, its processing must be automated. Almost all information can be represented in natural language; this richness of natural language, however, makes it very difficult to process computationally. The traditional computational processing of information involves a pattern-matching process: a literal, character-by-character comparison of the words in the natural language representing the information. This simplistic approach is known as syntactic information processing. In contrast, semantic information representation provides a universal understanding of information (by both humans and machines) and leads to automated information processing. This can be achieved by attaching meaning to letters, words, phrases, signs, and symbols. Semantic information processing is seen as a means of resolving the problem of ambiguity in syntactic information processing (Richardson, 1994).
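The limitation of syntactic, pattern-matching processing can be illustrated with a minimal, hypothetical sketch; the query and document strings below are invented for illustration only:

```python
# Syntactic information processing: a literal pattern match on character strings.
# Hypothetical example; the query and documents are invented for illustration.
query = "car"
documents = ["automobile for sale", "used car dealership"]

# A character-by-character (substring) comparison finds only literal occurrences.
matches = [d for d in documents if query in d]
# "automobile for sale" is missed even though it is semantically relevant.
```

A semantic approach would instead relate "car" and "automobile" through shared meaning, for instance via an ontology's shared vocabulary, rather than through surface strings.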
While discussing automated processing of natural language, Miller (1995) stated that because meaningful sentences are composed of meaningful words, any system that hopes to process natural languages as people do must have information about words and their meanings. This information is traditionally provided through dictionaries, and machine-readable dictionaries are now widely available. But dictionary entries evolved for the convenience of human readers, not for machines.
According to Kana and Akinkunmi (2014), for the semantic processing of information to be possible, systems must be able to understand the meaning of the data they are processing and then perform that processing semantically. To achieve this, three key issues must be resolved:
- Information should be represented in such a way that its semantics is contained within its representation and is unambiguous.
- It should be possible for machines to deduce the semantics of the represented data, possibly with some inference capability.
- It should be possible for two or more systems processing related information to interoperate.
Traditionally, data representation and processing are limited to the syntactic level, which cannot achieve the semantic goal. It is widely agreed that the ontological representation of knowledge provides the necessary means for achieving successful semantic information representation.
A clearer definition of an ontology was provided by Sowa (2000) as:
“The study of the categories of things that exist or may exist in some domain. The product of such a study, called an ontology, is a catalog of the types of things that are assumed to exist in a domain of interest D from the perspective of a person who uses a language L for the purpose of talking about D. The types in the ontology represent the predicates, word senses, or concept and relation types of the language L when used to discuss topics in the domain D.”
To provide common understandings of domains, logic-based languages are needed to supply a sound inference mechanism that facilitates reasoning over the content of the modelled domain. Such languages are potential candidates for representing information for semantic processing.
According to Laskey et al. (2008), modelling the uncertain aspects of the world in ontologies has attracted a lot of interest from ontology developers in the field of Artificial Intelligence (AI), especially in the World Wide Web (WWW) community. The WWW community envisions:
- Effortless interaction between humans and computers.
- Seamless interoperability and information exchange among web applications, and
- Rapid and accurate identification and invocation of appropriate Web services.
As work with semantics grows more ambitious, there is an increasing appreciation of the need for principled approaches to representing and reasoning under uncertainty. Uncertainty is a situation involving imperfect and/or unknown information. The term “uncertainty” encompasses a variety of aspects of imperfect knowledge, including incompleteness, vagueness, ambiguity, and others (Laskey et al., 2008).
Hence, uncertainty in ontologies needs to be tackled in order to achieve valid inferences in artificial intelligence, in anticipation of the visions of the WWW community.
Over the years, researchers have made many efforts to achieve this goal, among which are fuzzy sets by Zadeh (1965) and the theory of rough sets by Pawlak (1982). All these theories have their inherent difficulties, as pointed out by Maji et al. (2001). One major problem in these theories is their incompatibility with parameterization tools. To overcome these difficulties, Molodtsov (1999) initiated the concept of the soft set, which can be used as a generic tool for dealing with uncertainty. However, Roy and Maji (2007) pointed out that the classical soft set is not appropriate for dealing with imprecise and fuzzy parameters. On this basis, Maji et al. (2001) introduced the concept of the fuzzy soft set, a more generalized concept combining the fuzzy set and the soft set, in order to handle fuzzy parameters.
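As a minimal illustration of the fuzzy soft set idea, the sketch below represents a pair (F, E) over a small universe U as nested dictionaries of membership degrees. The universe, parameters, and degree values are all hypothetical, and the min t-norm is used for the AND operation, in the spirit of the fuzzy soft set operations of Maji et al. (2001):

```python
# Minimal illustrative sketch of a fuzzy soft set (F, E) over a universe U.
# A fuzzy soft set maps each parameter e in E to a fuzzy subset of U,
# i.e. a function U -> [0, 1] giving membership degrees.
# All object names, parameters, and degrees here are hypothetical.

U = {"h1", "h2", "h3"}            # universe of objects (e.g. houses)
E = {"expensive", "beautiful"}    # parameter set

# F : E -> fuzzy subsets of U, stored as nested dicts of membership degrees
F = {
    "expensive": {"h1": 0.9, "h2": 0.4, "h3": 0.1},
    "beautiful": {"h1": 0.6, "h2": 0.8, "h3": 0.3},
}

def fs_and(F1, F2, e1, e2):
    """AND of two fuzzy approximations using the min t-norm."""
    return {x: min(F1[e1].get(x, 0.0), F2[e2].get(x, 0.0)) for x in U}

# Degrees to which each object is both expensive and beautiful.
both = fs_and(F, F, "expensive", "beautiful")
```

Here each object's degree of satisfying both parameters is the minimum of its two membership degrees, so, for instance, h1 gets min(0.9, 0.6) = 0.6; a classical soft set would instead force each degree to be exactly 0 or 1, which is why it cannot capture such fuzzy parameters.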