Knowledge Integration

Science Education

M.C. Linn, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2.1 Designing Accessible Ideas

To promote knowledge integration, students need a designed curriculum that includes pivotal cases and bridging analogies to help them learn. Rather than asking experts to identify the most sophisticated ideas, designers need to select the most accessible and generative ideas to add to the mix of student views. College physics courses generally start with Newton rather than Einstein; in pre-college courses one might start with everyday examples from the playground rather than the more elegant but less understandable frictionless problems.

To make the process of lifelong knowledge integration accessible, students need some experience with sustained, complex research. Carrying out projects such as developing a recycling plan for a school or researching possible remedies for the worldwide threat of malaria engages students in the process of scientific inquiry and can establish lifelong learning skills. Often, however, science courses neglect projects or provide less knowledge-integration-intensive experiences such as a general introduction to critical thinking or hands-on recipes for solving unambiguous problems. By using computer learning environments to help guide students as they carry out complex projects, curriculum designers can foster a robust understanding of research (Driver et al. 1996).

Projects take instructional time, require guidance for individual students, bring the complexities of science to life, and depend on well-designed questions. Students often confuse variables such as food and appetite, rely on flawed arguments from advertisements or other sources, and flounder because they lack criteria for critiquing their own progress. For example, when students critique projects, they may comment on neatness and spelling rather than looking for flaws in an argument. Many teachers avoid projects because they have not developed the pedagogical skills necessary to mentor students, deal with the uncertainties of contemporary science dilemmas, or design researchable questions. Research shows that computer learning environments can make complex projects more successful by scaffolding research, providing help and hints, and freeing teachers to collaborate with students about complex science issues (Feurzeig and Roberts 1999, Linn and Hsi 2000, White and Frederiksen 1998).

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B0080430767024414

Promoting metacognition

Barbara Blummer, Jeffrey M. Kenton, in Improving Student Information Search, 2014

Importance of matching prompt type to individuals' learning needs

Research demonstrates the effectiveness of prompts in promoting students' metacognitive behaviors. Yet, studies note the importance of considering the individual's mental capacities in designing these interventions. Ge et al. (2005) looked at different types of questions, such as elaboration, question guidance, or no question prompts, to determine the role of prompts in graduate students' cognitive and metacognitive strategies in problem-solving practices on the web. Their findings supported the role of question prompts in fostering cognitive and metacognitive activities to support ill-structured problem solving. They linked the effectiveness of prompt type to the user and suggested that "question prompts required relevant prior knowledge and sufficient schema in order to be effective" (p. 234). Ge et al. concluded that question prompts worked best "when students had sufficient schemata" about a domain, "when they were free of pre-assumptions" with poor problem solvers, and to "facilitate cognition and metacognition" (p. 235).

Generic and directed prompts

Also, Davis (2003) focused on the differences between generic and directed prompts in promoting reflection among middle school students working with the Knowledge Integration Environment software and curricula. She described generic prompts as open ended and aimed at encouraging students to think aloud. On the other hand, she defined directed prompts as offering "hints about what to think about," and geared toward fostering planning and monitoring (p. 102). In this research, middle school students worked in pairs to locate science information on the web, and their completed projects constituted the primary source of the data for the study. Students' beliefs were measured along three dimensions, including autonomy, which viewed science learning from the perspective of internal or external responsibility. Davis concluded that the generic prompts promoted "productive reflection," which fostered students' abilities to "expand their repertoires of ideas and identify weaknesses in their knowledge" that facilitated "knowledge integration" (p. 116). She argued that these prompts were particularly suited to middle-autonomy students in helping them to "perform at higher levels of coherence than directed prompts" (p. 126).

Ifenthaler (2012) reported similar results in his study that compared the effectiveness of generic and directed prompts in promoting self-regulated learning among 98 college students. In this case, students were assigned to one of three groups: generic, directed, or no prompts. Students read an article about the impact of virus infections on the immune system and answered questions about influenza and HIV infection. Students also created a concept map to illustrate their understanding of the process. According to the results, participants in the generic group gained more domain-specific knowledge as well as structural and "semantic understanding of the problem scenario" (p. 48). Ifenthaler concluded that generic prompts provided more autonomy for learners in self-regulating their problem solving. However, the author pointed out that directed prompts may be more effective for students who lack a set of problem-solving skills.

Similarly, Ge and Land (2003) studied the effect of question prompts in collaborative group problem solving; they suggested the prompts were effective in helping students focus on the questions, but of limited value in fostering individuals' question generation, elaboration, or clarification of their own or their peers' understanding. Likewise, Davis (2000) compared middle school science students' use of activity and self-monitoring prompts, and concluded that students used the prompts in different ways. Davis argued that "prompts can play a variety of roles in learning environments" if designers "craft the form and frequency of the prompt experience" (p. 834).
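The contrast these studies draw between generic and directed prompts lends itself to a simple selection rule. The sketch below is purely illustrative and not taken from any of the cited studies: the prompt wordings, learner-profile fields, and thresholds are all hypothetical, loosely following the finding that generic prompts suit learners with sufficient schemata while directed prompts help weaker problem solvers.

# Illustrative sketch only: choosing a prompt type from a learner
# profile, loosely following the findings above (generic prompts suit
# learners with sufficient schemata; directed prompts help weaker
# problem solvers). Prompt wordings, profile fields, and thresholds
# are hypothetical, not taken from the cited studies.

from dataclasses import dataclass

GENERIC_PROMPTS = [
    "Right now, I am thinking...",                 # open ended, think-aloud style
    "To do a better job this time, we need to...",
]
DIRECTED_PROMPTS = [
    "What evidence supports your claim?",          # hints about what to think about
    "Have you checked each source you relied on?",
]

@dataclass
class LearnerProfile:
    prior_knowledge: float        # 0.0-1.0, estimated domain schemata
    problem_solving_skill: float  # 0.0-1.0

def select_prompts(profile: LearnerProfile) -> list[str]:
    """Generic prompts for learners with sufficient schemata and
    problem-solving skill; directed prompts otherwise."""
    if profile.prior_knowledge >= 0.5 and profile.problem_solving_skill >= 0.5:
        return GENERIC_PROMPTS
    return DIRECTED_PROMPTS

print(select_prompts(LearnerProfile(0.8, 0.7)))  # generic prompts
print(select_prompts(LearnerProfile(0.3, 0.4)))  # directed prompts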

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781843347811500098

Utilizing the tutorial

Barbara Blummer, Jeffrey M. Kenton, in Improving Student Information Search, 2014

Idea tactics for middle school students

Research by Davis (2003) on the usefulness of generic prompts in helping middle school students develop an understanding of science concepts suggests that the type of intervention offered by the idea tactics would also enhance their web search. Davis employed the Knowledge Integration Environment, which "uses sentence-starter prompts to foster both metacognitive and sensemaking activities" (p. 94). The author concluded that generic prompts led to "productive reflection that in turn helps students expand their repertoires of ideas and identify weaknesses in their knowledge" (p. 116). Similarly, the idea tactics present sentence fragments that encourage reflection as well as monitoring and planning. These concepts could also be used in a web page or portal learning environment to provoke reflective thought and support understanding for middle school students.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781843347811500189

Detection of Conflicts in Security Policies

Cataldo Basile, ... Stefano Paraboschi, in Computer and Information Security Handbook (Third Edition), 2013

Unique Name Assumption

The Unique Name Assumption is a commonly accepted assumption in most model-driven tools. It consists of assuming that different names will always denote different elements in the model. This is usually not true in DL reasoners because of the essential nature of knowledge integration problems. In fact, in the Semantic Web scenario, different authors may describe the same entities (both shared conceptualizations and physical objects), assigning a new name, generally in the form of a Uniform Resource Identifier, defined independently from other users.

These properties must be considered carefully when applying DL and Semantic Web tools to the detection of policy conflicts. The obstacles introduced can be solved as long as attention is paid to them. Otherwise, misbehaviors of the system can be observed.
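To make the difference concrete, the following toy sketch (not from the chapter; all names and data are invented) shows how a naive policy-conflict check behaves with and without the Unique Name Assumption: under UNA, two URIs minted by different authors for the same user are treated as distinct subjects and the conflicting rules go unnoticed, until an owl:sameAs-style equality merges them.

# Illustrative sketch (invented names and data): a toy policy-conflict
# check with and without the Unique Name Assumption. Two rules conflict
# when they assign opposite actions to the same subject/resource pair.

rules = [
    ("http://a.example/staff#jsmith", "fileserver", "permit"),
    ("http://b.example/users#john.smith", "fileserver", "deny"),
]

# Equalities a reasoner might derive or be told (owl:sameAs style).
same_as = {("http://a.example/staff#jsmith",
            "http://b.example/users#john.smith")}

def canonical(name, una):
    """Map a name to a canonical representative; under UNA every name
    simply represents itself."""
    if una:
        return name
    for a, b in same_as:
        if name == b:
            return a
    return name

def find_conflicts(una):
    seen = {}
    conflicts = []
    for subject, resource, action in rules:
        key = (canonical(subject, una), resource)
        if key in seen and seen[key] != action:
            conflicts.append(key)
        seen[key] = action
    return conflicts

print(find_conflicts(una=True))   # [] -- the conflict is masked by UNA
print(find_conflicts(una=False))  # [('http://a.example/staff#jsmith', 'fileserver')]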

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128038437000557

Library consortia in China: cooperation, sharing, and reciprocity

Conghui Fang, in Chinese Librarianship in the Digital Era, 2013

Online resources and special resource integration service mode

An example of this is the Jiangsu key subject navigation system, which is organized by JALIS and constructed by Hohai University Library. Another is the Jiangsu Special Resources Database Service System, constructed by Suzhou University Library. These conduct deep processing and knowledge integration on special information resources and internet information resources in the universities of the consortia, set up online resource navigation and special resources databases, and provide a knowledge acquisition service for users in consortia. Another example is the Teaching and Scientific Research Digital Library being developed by Beijing Academic Online Library. Considering the characteristics of academic teaching and scientific research, and based on subject and curriculum, the Teaching and Scientific Research Digital Library integrated resources of electronic books, texts, audio, video, and pictures into 12 databases according to subject, degree authorization point, curriculum, scientific project, expert and scholar, ancient figure, institution, work, academic dissertation, network, and terminology and conference – and is open to users in consortia via the internet (Fan and Wang, 2008).

The document universal circulation and borrowing service model mainly refers to the service of sharing the collected paper document resources in consortia. For example, JALIS carries out such an activity. The service objects are the students, postgraduates, teachers, and researchers in ordinary universities of Jiangsu. The service area is libraries in ordinary universities of Jiangsu. Any reader who holds the "universal library card of Jiangsu universities" can enjoy the service of borrowing and reading in academic libraries of Jiangsu Province according to the rules of each individual library.

Two representative regional library consortia will now be introduced; they have some differences in system, operational model, and achievement, and have had a great effect on other regional consortia.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781843347071500082

21st European Symposium on Computer Aided Process Engineering

Edrisi Muñoz, ... Luis Puigjaner, in Computer Aided Chemical Engineering, 2011

3 Development of software architecture

The proposed informatics system allows the use of the ontology as a common model between actors, thus facilitating communication and knowledge reuse among them. Moreover, given the current lack of integration between the different control levels (Purdue Reference Model), the ontology eases the decision-support task by providing knowledge integration among decision layers. The architecture of the proposed ontological framework is based on BaPrOn (Batch Process Ontology), described in Muñoz et al. (2010). One of the major requirements considered for this software architecture is that the system should be constructed in an open-source, modular fashion (Horridge et al., 2007). In this way the ontology could be considered both a nominal and a heavyweight ontology, taking into account the factors that influence its complexity, such as concepts, taxonomy, patterns, constraints and instances. The software Protégé was used for the generation of the ontological model using OWL (Web Ontology Language). The OWL language has the expressive power needed to represent the different domains of the solution we want to explore. This unifying aspect, for instance, may make it easier to establish, through collaboration and consensus, the useful vocabularies (e.g. ontologies) needed for distributed cooperative and integrative applications using the World Wide Web and internal servers.
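As an illustration of what such an OWL model looks like when assembled programmatically rather than in Protégé, the sketch below builds a tiny batch-process taxonomy with Python's rdflib. This is not BaPrOn itself; the namespace, class names, and instance data are invented for the example.

# Minimal sketch of building a small OWL taxonomy programmatically.
# Not BaPrOn itself: the namespace and names below are invented.

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/batch-process#")

g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

# A tiny taxonomy of recipes and process cells.
for cls in (EX.Recipe, EX.MasterRecipe, EX.ControlRecipe, EX.ProcessCell):
    g.add((cls, RDF.type, OWL.Class))
g.add((EX.MasterRecipe, RDFS.subClassOf, EX.Recipe))
g.add((EX.ControlRecipe, RDFS.subClassOf, EX.Recipe))

# An object property linking a recipe to the cell that executes it.
g.add((EX.executedIn, RDF.type, OWL.ObjectProperty))
g.add((EX.executedIn, RDFS.domain, EX.ControlRecipe))
g.add((EX.executedIn, RDFS.range, EX.ProcessCell))

# An instance, so decision-support queries have data to integrate.
g.add((EX.batch42recipe, RDF.type, EX.ControlRecipe))
g.add((EX.batch42recipe, EX.executedIn, EX.cellA))

print(g.serialize(format="turtle"))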

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780444537119501309

Intelligent Control

A. Meystel, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

II.F.1 Blend of the Disciplines: Control Theory, Artificial Intelligence, and Operations Research

So, how is it possible to control the motion and, more generally, the development of any kind of system, independent of its complexity, of our capability of separating it from the environment and localizing it, of the context, of the forms of knowledge available, and of the methods for its representation? One cannot do it using control theory alone. Its tools, however powerful, do not see beyond the myopic constraints imposed by the designer and hidden within the machinery of DIC.

One cannot do it using artificial intelligence theory (is there any?): AI surrenders when time-dependent dynamic processes are involved. It often cannot be included when stability and controllability are required. Both control theory and AI cannot operate from the OR paradigm: its queues and game situations are typical for a variety of applications. Intelligent control intends to fuse these three disciplines together when required. The blend seems to be powerful and flexible.

Intelligent control, as a discipline, is expected to provide a generalization of the existing control disciplines on the basis of (1) combined analysis of the plant and its control criteria, with the system of goals and metagoals that determine the process of negotiations through the overall design process; (2) processes of multisensor operation with data (knowledge) integration and recognition in the loop; (3) human–machine cooperative activities, including imitation and substitution of the human operator; and (4) computer structures representing the above-mentioned elements.

Here we see a shadowy area in intelligent control. None of the elements mentioned are supported by substantial research and analysis. There is no established terminology identifiable with the area of intelligent control. The word "intelligent" is used in a variety of contexts with different nuances of meaning, which contributes to confusion in a number of cases. A tremendous inertia has resulted from following the conventional recommendations and views, which hinders the development of intelligent control ideas and methods.

Intelligent control emerged from the intersection of a number of important scientific disciplines: automatic control, artificial intelligence, and operations research. Interest in this field began in the early 1980s; however, the "birthdate" of intelligent control can be considered the date of the first meeting of scientists interested in intelligent control (Troy, New York, 1985). This meeting was a testament to the existence of substantial interest in the phenomenon. At the first meeting, the principles of intelligent control as a discipline were put together. After the first meeting, the IEEE Technical Committee on Intelligent Control was organized. About 200 IEEE members joined the new committee. An interesting discussion was in progress dedicated to the definition of intelligent control as well as to a possible syllabus of courses on intelligent control. At the second meeting (Philadelphia, 1987), the discussions continued, demonstrating growth in interest in the new emerging discipline. The second meeting built on the principles obtained at the first meeting and highlighted many of the future perspectives. This meeting had more sponsors and broader participation (at the first meeting only delegates from the United States participated; at the second meeting there were delegates from Europe, Japan, and other countries). The third international conference (Washington, DC, 1988) ascertained a number of conventions in terminology as being accepted by most of the participants (several hundred scientists became members of the Technical Committee). Since 1988 the symposium has been held by IEEE annually, and several journals dedicated to intelligent control are published.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B0122274105003483

Architecture as a Key Driver for Agile Success

Ben Isotta-Riches, Janet Randell, in Agile Software Architecture, 2014

15.3.1.2 Continuing architecture and design activity during sprints

Once sufficient up-front analysis and design has been completed, there are a number of other key points to consider regarding how to proceed with an emergent architecture and design approach throughout the remainder of the project. These include:

Inclusion of solution architects in project teams

As we have previously discussed, the complexity of the Aviva UK IT estate means that developers will often have insufficient understanding of the end-to-end solution for a particular business change project. We have a team of solution architects who provide this integration knowledge and expertise and develop the high-level, end-to-end solution architecture and design. These solution architects are involved in the initial architecture and design activity on agile projects, and will usually be required to remain involved with the project team to assist in the ongoing development of the emergent architecture.

"Just in time" approach to emergent architecture and design

Architecture and design uncertainties that remain unresolved during project initiation cannot necessarily be left to be resolved within a sprint. In fact, this is rarely advisable. At this point, the estimated effort required to resolve each uncertainty becomes a key factor in planning the approach to resolution. Resolving too early is likely to waste time and resources, since there may well be insufficient information available, and requirements are likely to change, impacting the validity of the agreed resolution. Leaving resolution too late, however, will affect project progress. Uncertainties therefore need to be closely tracked by both analysts and solution architects to ensure their timely resolution just prior to development within sprints.

Two aspects of our agile framework ensure that design activity to resolve uncertainties is completed at the "last responsible moment" prior to development: a design plan, and a "look ahead" meeting.

With respect to the design plan, we found that when an uncertainty is identified that will require some significant impact analysis, architecture and design activity to be completed, the estimated time needed to resolve that uncertainty should be annotated to the relevant story. As with relative estimation, this is not intended to be accurate or fully researched, but gives a view of the optimum timescale for architecture and design activity to start for that story. For example, in the Travel application example described earlier, the architecture and design activity needed to agree the end-to-end solution for online documentation required four weeks (two sprints). This information was attached to the online documentation story, clearly visible in the product backlog.

With respect to the "look ahead" meeting, we found that holding an additional meeting midway between the sprint kickoff and sprint review provides an opportunity for team members (particularly those focusing on design activity) to "look ahead" to future stories in the prioritized backlog and identify any actions that are needed, such as the following:

To resolve any remaining significant uncertainties for the stories likely to be included in the next sprint. The definition of "significant" in this context is anything that would cause unacceptable delay to sprint progress if it were resolved in-sprint.

To resolve any significant uncertainties for future stories as indicated by the lead times previously assigned to stories. For example, if the project is currently working on sprint 4, and a story that is likely to be included in sprint 6 has some architecture and design activity that is believed to need two sprints' elapsed time to complete, then that activity must be started immediately.

In this way, architecture and design activity continues throughout the life of the project delivery, both in-sprint and ahead of sprints, allowing the flow of valuable software delivery to continue smoothly without interruption or unnecessary delay.
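The lead-time rule in the look-ahead example above is simple enough to state as code. The sketch below is illustrative only, with an invented backlog structure; it flags the stories whose architecture and design activity must start in the current sprint to be ready in time.

# Illustrative sketch of the "look ahead" lead-time check described
# above: given each story's target sprint and the estimated sprints of
# architecture/design work it needs, flag the stories whose design
# activity must start now. The backlog data is invented.

from dataclasses import dataclass

@dataclass
class Story:
    name: str
    target_sprint: int        # sprint the story is expected to enter
    design_lead_sprints: int  # estimated sprints of design work needed

def design_work_due(backlog: list[Story], current_sprint: int) -> list[Story]:
    """Stories whose architecture/design activity must start in the
    current sprint (or is already late) to finish before development."""
    return [s for s in backlog
            if current_sprint >= s.target_sprint - s.design_lead_sprints]

backlog = [
    Story("online documentation", target_sprint=6, design_lead_sprints=2),
    Story("payment gateway swap", target_sprint=9, design_lead_sprints=3),
]

# In sprint 4, the sprint-6 story with a two-sprint lead time must start now.
for story in design_work_due(backlog, current_sprint=4):
    print(f"start design for: {story.name}")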

Our agile framework, depicted in Figure 15.1, illustrates the aspects that are fundamental to architecture and design activity:

Figure 15.1. Aviva UK agile framework, highlighting the architecture and design activity.

Initiation, which includes sufficient up-front architecture and design;

Uncertainty management (design plan), which ensures that ongoing emergent architecture and design activity is completed at the "last responsible moment";

The "look ahead," which ensures sufficient focus on emergent architecture and design activity beyond the current sprint.

Emergent architecture and design skills

Solution architects who are familiar with using a waterfall approach, and are therefore used to creating the complete high-level design up front, need to develop a different skill set. They need to both understand how much architecture and design activity to complete during initiation, and also develop the skills needed to evolve the architecture and design in response to changing requirements during the sprints. The cultural change here should not be underestimated, as this is a very different technique and requires a different approach to risk, complexity and ambiguity, as well as to the principles of architecture and design. It also requires a significantly more collaborative approach, working with the business analysts, developers and testers throughout the project lifecycle. Supporting the solution architecture community through this change with training and awareness exercises is required to ensure success. In Aviva UK, we have used a number of methods to provide this support, including training, the use of skilled external resources, and the creation of role-specific peer-support groups.

Architecture and design documentation

Not least among the challenges facing solution architects and designers who are accustomed to working in a waterfall delivery lifecycle is the question of architecture and design documentation. Using the waterfall approach, the documentation need is quite clear: to document the high-level solution architecture and design of the end-to-end solution and provide sufficient information to allow the creation of detailed application designs and, later, the build process for the applications. In an agile project, however, there is far greater use of verbal communication and collaboration to align application designers and developers, so architecture and design documentation can, and should, be kept to a minimum. We have not yet finalized our guidelines in this area, but broadly recommend that architecture and design documentation should be sufficient to ensure a common understanding of the agreed scope, context and high-level component architecture of the solution. It must also be sufficient to enable future application changes to be effective. The documentation must be concise and easy to change, so diagrams are ideal, and indeed will frequently be sufficient.

Designing for change

The ability to design in such a way as to facilitate future change is another core skill that is essential for success. Taking an agile approach by allowing requirements to emerge will in itself help to drive this behavior and develop the appropriate skills.

For example, experienced software designers at Aviva UK often state that "…in order to develop a good service, I need to know all the requirements up front." Taken to its logical conclusion, this means that any future change to the service is expected to be difficult. With such a high expectation of change and adaptability in today's environment, this is not a good mindset, and it leads to inefficient behaviors. Instead, by forcing change during the development of the service through emergent requirements, the chances of easily incorporating future change are significantly increased.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124077720000149

Computational Logic

Pascal Hitzler, ... Axel Polleres, in Handbook of the History of Logic, 2014

5 Particular Challenges to Using Logic-Based Knowledge Representation on the Web

The use of logic-based knowledge representation and reasoning at the scale of the World Wide Web poses a number of particular challenges which have so far not received principal attention in logic research. We list some of them in the following.

The added value of a good machine-processable syntax for knowledge representation formalisms is easily underestimated. However, it is a key basis for knowledge exchange and integration which needs to be approached carefully in order to obtain the widest possible agreement between stakeholders. The World Wide Web Consortium has gone a long way in establishing recommended standards for knowledge representation for the Semantic Web, in particular through its work on RDF [Lassila and Swick, 2004; Cyganiak and Woods, 2013], OWL [Smith et al., 2004; Hitzler et al., 2012], and RIF [Boley et al., 2013; Boley and Kifer, 2013a], but also by establishing special-purpose shared vocabularies based on these, e.g. the SKOS Simple Knowledge Organization System [Miles and Bechhofer, 2009], SSN Semantic Sensor Networks [Compton et al., 2012], and provenance [Groth and Moreau, 2010].

Investigating the scalability of automated reasoning approaches is, of course, an established line of inquiry in computer science. Nevertheless, dealing with Web-scale data lifts this issue to yet another level. Shared-memory parallelization of reasoning is highly effective [Kazakov et al., 2011]; however, it breaks down if data size exceeds capacity. Massive distributed-memory parallelization has started to be investigated [Mutharaju et al., 2013; Schlicht and Stuckenschmidt, 2010; Urbani et al., 2011; Urbani et al., 2012; Weaver and Hendler, 2009; Zhou et al., 2012], but there is as yet insufficient data for a verdict on whether distributed-memory reasoning will be able to meet this challenge. Some authors even call for the investigation of non-deductive methods, e.g. borrowed from machine learning or data mining, as a partial replacement for deductive approaches [Hitzler and van Harmelen, 2010].

Automated reasoning applications usually rely on clean, single-purpose, and usually manually created or curated knowledge bases. In a Web setting, however, it would often be an unrealistic assumption that such input would be available, or would be of sufficiently small volume to make manual curation a feasible approach. In some cases, this problem may be reduced by crowdsourcing data curation [Acosta et al., 2013]. Nevertheless, on the Web we should expect high-volume or high-throughput input which at the same time is multi-authored, multi-purposed, context-dependent, contains errors and omissions, and so forth [Hitzler and van Harmelen, 2010; Janowicz and Hitzler, 2012]. The aspects just mentioned are often referred to as the volume (size of input data), velocity (speed of data generation) and variety aspects of data; in fact these three V's are commonly discussed within the much larger Big Data context, within which many Semantic Web challenges can be located [Hitzler and Janowicz, 2013].

To give only one example of variety which is particularly challenging in a Semantic Web context, consider basic geographical notions such as forest, river, or village, which depend heavily on social understanding and tradition, and are furthermore often influenced by economic or political incentives [Janowicz and Hitzler, 2012]. This type of variety is often referred to as semantic heterogeneity, and it cannot be overcome by merely enforcing a single definition: in fact, the different perspectives are often incompatible and would result in logical inconsistencies if combined. Research on the question of how to deal with semantic heterogeneity may, of course, be more a question of pragmatics than of formal logic; nevertheless the body of literature dealing with this issue is still too small to confidently locate major promising approaches. Formal logical approaches which have been proposed as partial solutions include fuzzy or probabilistic logics [Klinov and Parsia, 2008; Lukasiewicz and Straccia, 2009; Straccia, 2001], paraconsistent reasoning [Maier et al., 2013], and the use of defaults or other non-monotonic logics related to reasoning with knowledge and belief [Baader and Hollunder, 1995; Bonatti et al., 2009; Donini et al., 2002; Eiter et al., 2008a; Knorr et al., 2011; Motik and Rosati, 2010; Sengupta et al., 2011], but the larger issue remains unresolved. Others have advocated the use of ontology design patterns for meeting the challenge of semantic heterogeneity [Gangemi, 2005; Janowicz and Hitzler, 2012], but it is currently not clear how far this will carry.
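As a miniature illustration of the fuzzy-logic route just mentioned (not taken from the chapter; the membership functions and numbers are invented), incompatible crisp definitions of "river" can be recast as graded memberships that coexist without producing a logical inconsistency:

# Toy illustration (invented functions and numbers): rather than
# forcing one crisp definition of "river", each community's criterion
# becomes a graded membership function, and the degrees can coexist.

def ramp(x, lo, hi):
    """Piecewise-linear membership: 0 below lo, 1 above hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

# Two incompatible crisp definitions, recast as fuzzy memberships:
# one community classifies by width, another by flow volume.
def river_by_width(width_m):
    return ramp(width_m, 3.0, 30.0)

def river_by_flow(flow_m3s):
    return ramp(flow_m3s, 1.0, 100.0)

w, f = 12.0, 4.0  # a watercourse: 12 m wide, 4 m^3/s flow
print(f"river (width criterion): {river_by_width(w):.2f}")  # 0.33
print(f"river (flow criterion):  {river_by_flow(f):.2f}")   # 0.03
# A combined degree, e.g. via the Goedel t-norm (minimum):
print(f"combined (min): {min(river_by_width(w), river_by_flow(f)):.2f}")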

There exist a multitude of different knowledge representation paradigms based on different and often incompatible design principles. Logical features which appear useful for modeling, such as uncertainty handling or autoepistemic introspection, are often studied in isolation, while the high-variety setting of the Semantic Web would suggest that combinations of features need to be taken into account in realistic settings. However, merging different knowledge representation paradigms often results in unwieldy, highly complex logics for which strong automated reasoning support may be difficult to obtain [de Bruijn et al., 2011; de Bruijn et al., 2010; Knorr et al., 2012]. Even W3C recommended standards, which on purpose are designed to be of limited variety, expose this issue. The OWL 2 DL profile is essentially a traditional description logic, but if serialised in RDF (as required by the standard), the RDF formal semantics is not equivalent to the OWL formal semantics, and the sets of logical consequences defined by these two formal semantics for an OWL file (serialised in RDF) are not contained within each other. The OWL 2 Full profile was established to encompass both OWL 2 DL and RDF Schema, but we are not aware of any practical use of its formal semantics. Concerning the relationship between OWL and RIF, in contrast, the gap seems to be closing now, as discussed in Section 6 below.

Another practical obstacle to the use of formal logic and reasoning on the Semantic Web is the availability of strong and intuitive tools and interfaces, of industrial strength, which would relieve application developers from the burden of becoming an expert in formal logic and Semantic Web technologies. Of course, many useful tools are available, e.g. [Lehmann, 2009; Calvanese et al., 2011; David et al., 2011; Horridge and Bechhofer, 2011; Tudorache et al., 2013], and some of them are of excellent quality, but a significant gap remains to meet practical requirements.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780444516244500162

Advances in Geographic Object-Based Image Analysis with ontologies: A review of main contributions and limitations from a remote sensing perspective

Damien Arvor, ... Marie-Angélique Laporte, in ISPRS Journal of Photogrammetry and Remote Sensing, 2013

4.1.1 Building ontologies

To ensure knowledge integration among various scientific domains and subsequently ease knowledge sharing, a highly recommended good practice in ontology building consists of identifying existing ontologies that could be reused, interconnected or extended by the remote sensing community (Pinto and Martins, 2004; Suarez-Figueroa and Gomez-Perez, 2009). Because the focus in GEOBIA is on Earth Observation, we might consider the integration of ontologies referring to earth and environment terminologies and to scientific observation. As an example, the Semantic Web for Earth and Environmental Terminology (SWEET) ontology (Raskin and Pan, 2005) might be used to consider major concepts used in earth and environment studies. The Extensible Observation Ontology (OBOE) (Madin et al., 2007) or the Observations & Measurements (O&M) ontology (OGC, 2005), which provide a generic design to structure scientific observations and corresponding measurements, might also be of major interest. It is noteworthy that observation ontologies might be particularly relevant for GEOBIA applications because they include the possibility of describing the context of an observation, i.e., spatial relationships (e.g., a tree is observed in a field), temporal relationships (e.g., the observation of bare soil after a forest was observed means that the object refers to a deforested area) or sensing conditions (e.g., a landscape in the Brazilian Amazon is observed on a SPOT5 image in a certain context: acquisition date, sun elevation angle, atmospheric conditions, processing level).
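The observation-context idea can be made concrete with a small data-structure sketch. The following is loosely inspired by OBOE/O&M-style observation models, not taken from them; every field name and the toy deforestation rule are invented for illustration.

# Minimal sketch (invented field names and rule, loosely inspired by
# OBOE/O&M-style observation models): an observed entity is recorded
# together with its spatial, temporal, and sensing context, so that
# later reasoning can use that context.

from dataclasses import dataclass
from datetime import date

@dataclass
class SensingContext:
    sensor: str                # e.g., "SPOT5"
    acquisition_date: date
    sun_elevation_deg: float

@dataclass
class Observation:
    entity: str                # what is observed, e.g., "bare soil"
    located_in: str            # spatial relationship: enclosing object
    context: SensingContext

def deforested(earlier: Observation, later: Observation) -> bool:
    """Toy temporal-relationship rule from the text: forest followed
    by bare soil at the same location suggests a deforested area."""
    return (earlier.entity == "forest"
            and later.entity == "bare soil"
            and earlier.located_in == later.located_in
            and earlier.context.acquisition_date < later.context.acquisition_date)

o1 = Observation("forest", "parcel-17",
                 SensingContext("SPOT5", date(2009, 7, 1), 55.0))
o2 = Observation("bare soil", "parcel-17",
                 SensingContext("SPOT5", date(2011, 8, 3), 57.5))
print(deforested(o1, o2))  # True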

Such ontologies should then be considered as a framework to build more specific ontologies linked to the domain of GEOBIA. Interesting ongoing efforts that should be mentioned in this domain address ontologies related to the geographic domain (Torres et al., 2011), land cover description (Corine land cover, LCCS) (see http://www.glcn.org/ont_0_en.jsp, http://harmonisa.uni-klu.ac.at/content/land-use-land-cover-ontologies), image description (Câmara et al., 2001; Quintero et al., 2009), sensor description (see OntoSensor from (Russomanno et al., 2005) and the Semantic Sensor Net Ontology, http://www.w3.org/2005/Incubator/ssn/wiki/Semantic_Sensor_Net_Ontology), spatio-temporal relationships (Bittner et al., 2009) and image processing methods and tools (Nouvel and Dalle, 2002).

However, the construction (or extension) of such ontologies presents serious conceptual and technical problems that need to be clarified. The main conceptual problems refer to classic issues that are faced when building ontologies. First, it is necessary to reach an ontological commitment to build neutral ontologies, including clearly describing concepts (Agarwal, 2005; Guarino, 1998; Mark et al., 2005). Achieving clear definitions is a difficult task because no consensus exists in a field regarding the concepts and relationships to include in ontologies (Bard and Rhee, 2004). Indeed, bringing a group of users, domain experts and ontology engineers to agreement on abstract concepts is a key obstacle (Janowicz, 2012). As a result, Janowicz questions the efficiency of traditional approaches to building ontologies, i.e., a top-down approach based on the extension of upper-level ontologies. In contrast, he argues that in the age of data-intensive science, bottom-up ontology building might be better adapted to reporting the diversity of viewpoints in a domain and the heterogeneity of data. Thus, ontology primitives could be derived from the data, and ontology building could use or reuse an ontology design pattern (ODP) to align the bottom ontologies using ontology mapping techniques (Janowicz, 2012).

Read full article

URL:

https://www.sciencedirect.com/science/article/pii/S092427161300124X