Objectives
To achieve its objectives, the project will analyze, specify, develop, and validate a complete end-to-end production chain that constitutes a significant innovation over the state of the art in major technological areas of work:
(1) development of innovative content representation formats with an emphasis on scalable content management techniques.
(2) design of an integrated customization and personalization framework able to deal with complex multimedia content formats and metadata.
(3) demonstration of the end-to-end platform accessing the content over various networks including fixed and wireless networks as well as fixed and mobile terminals.
Innovation areas
A] Scalable Representation Techniques
Intelligent and interoperable services require a representation of the multimedia content that allows effortless scalability of content according to the constraints set by the network, the end device and the user profile.
B] Integrated customization and personalization framework
Customization is the process of adapting content to constraints coming from the transport or terminal capabilities at the moment of viewing, including access rights management. Personalization is the process of adapting content to the request of the viewer or his/her stored preferences. Personalization and customization can be built upon the same technologies.
The core innovation of ISIS is the specification and development of an integrated customization and personalization framework.
Technology
A] Personalization
The idea behind personalization technologies is to help users during their interaction with complex services, for example when they want to find information items in a large database of available content. Their greatest benefit is the ability to make a complex multimedia service easy to use, by presenting only the content that a given user wants to see, in a way well suited to his or her habits, and at the appropriate time. In other words, personalization techniques allow the delivery of content and services actively tailored to individuals, based on rich knowledge about their preferences and behavior. As personalization technologies modify what is presented to each user, they are usually tightly integrated into the presentation components of the overall architecture. With personalization technologies in place, two individuals simultaneously accessing the same service could see completely different services and content.
The primary benefit of personalization is a better relationship with each user. Name recognition, convenience, and superior service are known factors in customer retention, and a well-integrated personalization solution can provide all of these benefits.
A good example is a tourist site or application on which the personalization infrastructure tracks a customer's browsing activity and remembers what kind of tourist items the customer has chosen, selected, or examined. All that information, possibly combined with other aggregate data, allows the system to recommend tourist items the customer is likely to enjoy. In fact, the number of items on the site may be so large that we cannot assume the user will find all interesting items on his or her own. Recommending specific items therefore increases the chances that the customer finds items he or she will like.
This type of personalization is referred to as personalized recommendation and is a kind of automatic adaptation of a service to user preferences. The primary use of recommender systems is in electronic commerce, where they allow a site to suggest products to potential customers. In marketing parlance, personalization delivers on the following objectives:
* to convert more browsers into buyers;
* to increase the average order size with cross-sell/up-sell recommendations; and
* to increase the frequency of purchases by promoting customer loyalty through learned customer preferences. The main objective is to win over the user and retain the customer, especially in a world like the web where a competitor is only "a click away".
Recommender systems can be also used in several other contexts in which informed suggestions could represent a useful personalization strategy.
Technically speaking, personalization techniques propose algorithms that, starting from on-line and off-line observation of user behavior, are able to predict user preferences related to the available services and accordingly propose services and content considered of interest to the user.
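As an illustration of this idea, the following minimal sketch predicts a user's interest in unseen items from observed behavior using item-item cosine similarity; it is one common way such preference-prediction algorithms can work, not the specific algorithms used in ISIS, and the interaction matrix is a made-up assumption.

```python
# Illustrative only: predict interest in unseen items via item-item similarity.
import numpy as np

# rows = users, columns = content items; entries = observed interest
# (e.g. viewing time or ratings), 0 = item not seen yet
interactions = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def item_similarity(m):
    """Cosine similarity between item columns."""
    norms = np.linalg.norm(m, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    unit = m / norms
    return unit.T @ unit

def predict(user_row, sim):
    """Score unseen items as a similarity-weighted sum of observed interest."""
    weights = sim.copy()
    np.fill_diagonal(weights, 0.0)
    scores = weights @ user_row
    scores[user_row > 0] = -np.inf      # do not re-recommend items already seen
    return scores

sim = item_similarity(interactions)
print(predict(interactions[0], sim))     # highest score = next item to propose
```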
In the case of the ISIS tourist planet, personalization requires the dynamic generation of an MPEG-4 catalogue (content delivery pages), implemented through an application server that queries the Content Description database for the content to insert in page templates, following the instructions of the personalization algorithms. To relate content and user profiles, content has to be tagged with metadata providing a semantic description of the stored tourist information. Contextual information, such as the time of day or the time spent reading a piece of content, can also be fruitfully used by the personalization algorithms.
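The following toy sketch shows how tagged content can be matched against a user profile and simple contextual information to pick the items for a personalized catalogue page. All names, tags and the scoring rule are illustrative assumptions, not the actual ISIS database schema or algorithms.

```python
# Illustrative sketch: rank tagged content by user profile and context.
from datetime import datetime

content_db = [
    {"id": "museum_tour", "tags": {"culture", "indoor"}},
    {"id": "beach_walk",  "tags": {"nature", "outdoor"}},
    {"id": "night_club",  "tags": {"music", "nightlife"}},
]

user_profile = {"culture": 0.9, "nature": 0.6, "nightlife": 0.2}

def score(item, profile, now):
    s = sum(profile.get(tag, 0.0) for tag in item["tags"])
    # contextual adjustment: nightlife items make little sense before the evening
    if "nightlife" in item["tags"] and now.hour < 18:
        s *= 0.1
    return s

def build_catalogue(db, profile, now=None, top_n=2):
    now = now or datetime.now()
    ranked = sorted(db, key=lambda it: score(it, profile, now), reverse=True)
    return [it["id"] for it in ranked[:top_n]]   # ids to insert into the page template

print(build_catalogue(content_db, user_profile))
```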
In addition to the already available commercial tools for personalization services, the ISIS platform realizes a new type of architecture that makes it possible to combine server-side and client-side user modeling engines, with special regard to users' privacy. User modeling on the server side is exposed to possible concerns in terms of privacy and security. Generally, it requires the user to have a high degree of confidence in the prospective use of his or her personal data by the service provider that maintains the server. Another drawback is that the user models stored at the server may not be specific to each single user, but rather correspond to aggregate profiles covering a plurality of users. Oftentimes, for each service there is a fixed, limited set of pre-defined user models among which the user has to choose when registering.
B] Context-awareness
With the ever more rapid integration of mobile technology into our everyday lives, we are also seeing the development of new applications to complement it. Mobile devices are no longer seen simply as communication devices, but are taking on the role of versatile, networked terminals enabling users to communicate multimodally as well as access Internet services, play games and access a wide range of multimedia content. However, there is a big difference with respect to traditional PC-based interaction in that it is no longer clear where, when and how users will need to make use of such services. A variety of situational factors can play a role in how the services should be provided. There is, therefore, more need than ever before for context-aware services and applications.
Terrain Navigation : Due to the very large amount of data needed to represent such scenes, a scalable streaming format is needed. This format has to be associated with transmission scenarios that exploit it fully. The format developed in ISIS relies on geometric wavelets, coupled with a geometric adaptation of the classical SPIHT zerotree compression. Though it provides very good compression and naturally lends itself to adaptive decoding, the detail selection / navigation scenario has to be specified in a context of real-time client / server dialog in order to play "3D on demand" and navigate freely while the data are streamed.
Figure a - Textured terrain during flyover
In practice, the end user should be able to select a terrain database and, after a moderate delay, fly over the terrain with no waiting time for transmission or loading. This is made possible by the selective transmission of the refinements needed by the current navigation. While the observer is viewing a particular location, the client requests the data needed to refine this location and to set up the neighboring areas for future visualization. This concept is illustrated by Figure b.
Figure b - View from above of a virtual navigation over a terrain. The red sector represents the actually viewed scene. The blue area is composed of the areas in which new data are requested
The overall behavior of both client and server is as follows: during navigation, the client maintains a cache of the information already transmitted. When a new location has to be refined, it sends its position and orientation in the scene to the server, along with a compact description of its cache content. The server determines the local data and corresponding precision, and sends them to the client. This exchange is repeated periodically.
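The sketch below mimics this periodic client/server exchange under the simplifying assumption that the terrain is tiled and each tile can be refined to increasing precision levels; the data structures are illustrative only, not the actual ISIS protocol.

```python
# Toy model of the periodic refinement exchange described above.

class Client:
    def __init__(self):
        self.cache = {}            # tile id -> highest precision level already received

    def request(self, position, orientation):
        # the viewpoint is sent together with a compact summary of the cache
        return {"pos": position, "orient": orientation, "cache": dict(self.cache)}

    def integrate(self, refinements):
        for tile, level in refinements.items():
            self.cache[tile] = max(self.cache.get(tile, 0), level)

class Server:
    def __init__(self, visible_tiles):
        self.visible_tiles = visible_tiles   # callable: viewpoint -> required tile levels

    def answer(self, req):
        tiles = self.visible_tiles(req["pos"], req["orient"])
        # send only the refinement levels the client does not have yet
        return {t: lvl for t, lvl in tiles.items() if lvl > req["cache"].get(t, 0)}

# toy run: two tiles in view, the client already holds tile "A" at level 1
server = Server(lambda p, o: {"A": 2, "B": 1})
client = Client()
client.cache["A"] = 1
reply = server.answer(client.request((0, 0), 90))
client.integrate(reply)
print(reply, client.cache)   # {'A': 2, 'B': 1} {'A': 2, 'B': 1}
```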
Benefits of the wavelet 3D mesh format:
# View-dependency : wavelets are well localized in space, thus allowing a geometric sorting of the coefficients according to the visibility of the portion of the mesh they refine.
# Sub-band decomposition : partial reconstruction can be achieved by using a subset of the whole wavelet coefficients set, instead of all of it.
# Hierarchical structure : the wavelet decomposition is based on canonical facet subdivision, which is a simple hierarchical structure that can be used for multiresolution animation or editing.
# Compression : because of this hierarchical organization, only the geometric component has to be encoded. Moreover, the surface variations are decorrelated by the wavelet transform, leading to one of the best known results in terms of lossy mesh compression.
# Bitstream embedding : the SPIHT algorithm provides by nature bitplane refinements of wavelet coefficients.
C] MPEG-21 with special focus on MPEG-21 Digital Item Adaptation
* Levels of Articulation for Virtual Humans : this technology enables the adaptation of animation for 3D virtual humans (regarding the number of animated joints and points) within MPEG-21 Digital Item Adaptation.
* Terminal capabilities, benchmarking : these features allow the adaptation engine to take into account the terminal characteristics and performance for a particular content.
D] MPEG-4 with special focus on :
* MPEG-4 Scalable Video Codec : This recently started standardisation activity within MPEG aims to provide a fully scalable video codec supporting temporal, spatial and quality scaling of a video stream. The competing proposed codecs are all based on wavelet technology. Due to the wavelet representation, which is used in the spatial as well as the temporal domain, the codec is inherently scalable along the spatial and temporal dimensions. However, the scaling levels are restricted to powers of two.
Within ISIS, a first optimization of the decoder code has been developed in order to support wavelet 2D+t video coding in a complete end-to-end streaming chain. As a result, one of the first real-time decoder implementations, at least for QCIF sequences at 25 fps, has been developed. Currently, decoding of CIF images can be achieved at 12.5 fps or more on high-end PCs (2.8 GHz).
* The coding system divides consecutive frames into groups, similar to the Group of Pictures (GOP) structure in MPEG-1/2/4. Each GOP has a typical size of 16 frames. The motion-compensated (MC) filtering is performed on pairs of frames to produce (at level 1) temporal low-pass (t-L) and high-pass (t-H) filtered frames using the Haar wavelet. The temporal low-pass frames are then decomposed again with the same MC filtering method. Continuing to level 4, the octave-based five-band temporal decomposition is generated. In this case, we have 1 t-LLLL frame, 1 t-HHHH frame, 2 t-HHH frames, 4 t-HH frames, and 8 t-H frames per GOP (= 16 frames here) and the associated motion vector fields MV(0)...MV(3).
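To make the octave structure concrete, the following minimal sketch performs the temporal Haar decomposition of a 16-frame GOP. Motion-compensated filtering is deliberately omitted, and frames are stand-in 1-D arrays; this is an illustration of the subband counts only, not the ISIS codec.

```python
# Plain temporal Haar decomposition of a GOP (no motion compensation).
import numpy as np

def haar_pair(a, b):
    low = (a + b) / np.sqrt(2.0)
    high = (a - b) / np.sqrt(2.0)
    return low, high

def temporal_decompose(gop, levels=4):
    """Return the high-pass subbands per level plus the final low-pass frame."""
    current = list(gop)
    subbands = []
    for _ in range(levels):
        lows, highs = [], []
        for a, b in zip(current[0::2], current[1::2]):
            lo, hi = haar_pair(a, b)
            lows.append(lo)
            highs.append(hi)
        subbands.append(highs)      # 8, 4, 2, 1 high-pass frames per level
        current = lows
    return subbands, current[0]     # ... plus 1 low-pass frame

gop = [np.random.rand(64) for _ in range(16)]   # stand-in frames
highs, low = temporal_decompose(gop)
print([len(h) for h in highs])                  # [8, 4, 2, 1]
```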
* MPEG-4 Audio BSAC : In the MPEG-4 Audio standard (Internet Audio Working Profile) there exists the so-called AAC-BSAC codec. Bit Sliced Arithmetic Coding allows scaling of the bitstream in steps of 1 kbit/s down to a certain base layer (typically 16 kbit/s per channel); it is therefore the only fine-granular scalable codec within MPEG-4 Audio. Base and enhancement layers plus some additional header information form an elementary stream.
The bitrate adaptation in the case of BSAC elementary streams is realized as truncation of the 48 enhancement layers. The base layers are not changed in length, but they include a parameter, framelength, which needs to be updated during the bitstream transformation since it provides information necessary for the decoding process. The BSAC elementary stream starts with a BSAC header, which remains unchanged during adaptation of the bitstream. The customization of such a bitstream can even be done frame by frame in streaming scenarios, since the frame is the minimum unit a decoder needs to reconstruct the audio signal.
BSAC avoids the exhaustive processing needed when audio streams are transcoded today. There is no need to decode and re-encode with different parameters, since the scalability is inherent in the nature of the BSAC stream. In order to deliver an adapted stream, just a few blocks of the bitstream are truncated, while others are updated. This computationally efficient operation can be done on demand, at the moment of stream delivery, as demonstrated within the ISIS project. The ISIS project results also include a BSAC encoder, a bitstream syntax description generator, and real-time decoders for PC and PDA.
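The sketch below illustrates this frame-by-frame adaptation idea: keep the base layer, truncate enhancement layers, and update the frame-length field. The frame layout used here is a simplified assumption for illustration, not the actual BSAC bitstream syntax.

```python
# Illustrative bitrate adaptation of a scalable audio frame by layer truncation.

def adapt_frame(frame, keep_layers):
    """frame = {'header': bytes, 'base': bytes, 'enhancements': [bytes, ...]}"""
    kept = frame["enhancements"][:keep_layers]          # drop the upper layers
    new_length = len(frame["base"]) + sum(len(e) for e in kept)
    return {
        "header": frame["header"],                      # header stays unchanged
        "framelength": new_length,                      # must be updated for the decoder
        "base": frame["base"],
        "enhancements": kept,
    }

frame = {"header": b"\x01", "base": b"\x00" * 40, "enhancements": [b"\xff" * 4] * 48}
print(adapt_frame(frame, keep_layers=10)["framelength"])   # 40 + 10*4 = 80
```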
* Scalable body and facial animation : Based on MPEG-4 BBA, the H-Anim skeleton, and MPEG-4 FAP and FAT, scalable body and facial animation enables the rendering of 3D animated virtual humans on different devices, such as standard desktop PCs but also laptops and personal digital assistants.
Different Levels of Detail
To adapt 3D representations, several tools have been developed to allow the authoring of MPEG-4 compatible models that are usable as input to the adaptation engine. The generated models support adapted facial and body animation.
Different FAP levels
* MPEG-4 Systems : adaptive 2D graphics
* Wavelet 3D Mesh : The purpose of the 3D Mesh scalable coding tool is to allow immersive virtual navigation in large scenes representing textured geographic data on various terminals. Due to the very large amount of data needed to represent such scenes, a scalable streaming format is needed. This format has to be associated with transmission scenarios that exploit it fully. Geometric wavelets for 3D meshes, used for the terrain fly-through application in ISIS, provide view-dependency, sub-band decomposition, a hierarchical structure, high compression and bitstream embedding.
Workpackage structure
In a nutshell, the ISIS project aims at analyzing, specifying, developing, and validating the technologies of a complete end-to-end chain that exploits the innovative opportunities offered by new services based on scalable, customizable, personalizable content.
The objective of offering users the optimum perceived quality, taking into account the nature of the content (audio, video and graphics) as well as the network, terminal, and user constraints, is a central challenge for the project.
These objectives are worked out in a number of logical phases and project activities:
- Technology evaluation to gather the basis and assess the state-of-the-art
- Platform System architecture design and global system specification
- Tool development for the different technical elements covered by the project
Specification and Implementation of scalable audio, video and graphics codecs
Specification and implementation of a content customization, personalization and delivery framework
Specification and implementation of content consumption tools (2D and 3D players) for end user devices
- Demonstrator build-up and integration
- Prototype Application development
- System and Prototype application validation
- Dissemination of results
Some of these phases are very closely interrelated. They are therefore grouped into the same workpackage in order to simplify the communication between the partners and within the project.
The project is therefore organized into 5 workpackages (WP) whose inter-relationship is shown in figure 4.
The figure shows also the flow of information within the project.
figure 4: Workpackage structure and interrelationship
WP1 is responsible for the technical and administrative management of the project as well as for the dissemination of results; WP2 is responsible for designing the overall architecture of the system and validating the technologies developed in the other WPs. WP3 to WP5, finally, are responsible for the analysis, specification, and implementation of the technologies.
ISIS project: Navigation systems
This project is carried out by Division of Communication Systems and Division of Automatic Control in cooperation with SAAB (Dynamics and Gripen).
Selected publications
Since this project focuses on managing master thesis projects, no PhD student is involved and hence there are no publications. For navigation publications, we refer to the sensor fusion publications, which all concern navigation problems in airborne, automotive and underwater systems.
Background
In recent years, activity in the navigation field has grown rapidly. There are, and will be, more and more areas where navigation becomes an important part of a system solution. So far navigation systems have been of major importance for aircraft, missiles, ships, etc., but they are now also being used in automobiles and other smaller systems. Even in aircraft systems, the advance in computer capacity has opened the possibility for a broad spectrum of applications where accurate navigation plays an important role. Typical tools today are inertial navigation systems (INS), which essentially integrate acceleration and angular velocity measurements into a position. Systems of increasing importance are Global Navigation Satellite Systems (GNSS), e.g. the Global Positioning System (GPS), which are based on satellite information. The use of maps and terrain information (on charts or in databases) has recently become a competitive component in navigation systems. Other sensors are more traditional, for instance distance measuring equipment that can measure distances to stations at known locations. A good navigation system should be capable of integrating the information from these different sources in an efficient and reliable way (a minimal filtering sketch is given at the end of this background section). In an aircraft application, the reliability requirements put on a navigation system vary depending on the objective at a given moment; the landing of an aircraft is a particular challenge from a reliability point of view. Originally, the main areas in this project were:
* Configuring the sensor data fusion algorithm
* Detection of failures and integrity monitoring in navigation systems
* Terrain database related applications
The licentiate thesis below focused on the second item:
* J. Palmqvist. On integrity monitoring of integrated navigation systems. Linköping Studies in Science and Technology. Thesis No 600, 1997.
As a continuation, Per-Johan Nordlund is studying a general framework attacking all three problems in his sensor fusion project. This navigation project is, so to speak, dormant except for master theses, and the research has been transferred to the sensor fusion project.
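As a minimal illustration of the integration idea mentioned above (not the project's actual algorithms), the sketch below integrates inertial acceleration measurements to a position and fuses in occasional GPS position fixes with a Kalman filter to bound the drift. It uses one spatial dimension, a fixed sampling time, and made-up noise levels.

```python
# 1-D INS/GPS fusion sketch with a linear Kalman filter.
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])        # state = [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])    # acceleration input (from the IMU)
H = np.array([[1.0, 0.0]])             # GPS measures position only
Q = 1e-3 * np.eye(2)                   # process noise (IMU errors), assumed
R = np.array([[4.0]])                  # GPS measurement noise, assumed

x = np.zeros((2, 1))
P = np.eye(2)

def step(x, P, accel, gps_pos=None):
    # predict with the inertial measurement
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    # correct with GPS whenever a fix is available
    if gps_pos is not None:
        y = np.array([[gps_pos]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = step(x, P, accel=0.2, gps_pos=0.05)
print(x.ravel())
```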
Master Theses
Today, the activity centers around an extensive master thesis program. The block diagram below and the reference list describe how over 50 theses fit into the general picture. There is a flight animation illustrating one of the recent theses, about a ground collision avoidance system. The movie was generated inside Matlab directly from simulation data and the geographical information system (GIS). The animation starts at low altitude over the lake Tåkern (50 km from Linköping), continues to the hill Omberg, and just before a ground collision becomes unavoidable, the system automatically steers over the hill and out to the lake Vättern.
Several of these theses have received awards:
# [25] The Polhem prize 1997 for best master thesis in Sweden.
# [10] Radionavigationsnämnden's 1st prize 1997.
# [17] Radionavigationsnämnden's 2nd prize 1997.
# [37] Radionavigationsnämnden's 1st prize 1999.
# [33] Radionavigationsnämnden's 2nd prize 1999.
# [56] Radionavigationsnämnden's 1st prize 2001.
# [52] Radionavigationsnämnden's 2nd prize 2001.
Another aspect is that these students have been a useful source of recruitment; the following students were employed at SAAB directly after finishing their theses: [2,6,9,15,17,20,22,25(both students),32,37,39,39]
Project status
There is no PhD student in the project at the moment, but several of the PhD students in sensor fusion are working on navigation problems. The project is nevertheless very active due to a large number of master theses at Saab Military Aircraft, Saab Dynamics, and also at other "non-ISIS" companies such as FOA (the national defence research institute), Celsius Aerotech and Luftfartsverket.
References:
1. Niclas Bergman. Recursive Bayesian Estimation: navigation and tracking applications. PhD thesis 579, 1999.
2. Jan Palmqvist. On integrity monitoring of integrated navigation systems. Licentiate thesis 600, 1997.
3. Niclas Bergman. Bayesian inference in terrain navigation. Licentiate thesis 649, 1997.
4. M. Lundberg. State estimation from flight test data with Extended Kalman filtering and smoothing. Master Thesis LiTH-ISY-EX-1637, 951212.
5. P. Johansson. Target Tracking by fusing Radar and IR-sensor data. Master Thesis LiTH-ISY-EX-1607, 951220.
6. J. Carlbom. A Study of Inertial Navigation Vertical Loops for Military Aircrafts. Master Thesis LiTH-ISY-EX-1567, 950925.
7. N. Bergman. Parallel filters for detection of aircraft manoeuvres. Master Thesis LiTH-ISY-EX-1598, 951006.
8. C. Nilsson. Knowledge Based Specification of a Pilot Model for Complex Tasks. Master Thesis LiTH-ISY-EX-1673, 951220.
9. B. G. Sundqvist. Simulator of JAS throttle control system, a mode changing system. Master Thesis LiTH-ISY-EX-1566, 951030.
10. F. Gunnarsson. Parallel filters for terrain reference navigation. Master Thesis LiTH-ISY-EX-1741, 960911.
11. P. Carlsson. Modelling and simulation of GPS. Master Thesis LiTH-ISY-EX-1699, 961115.
12. E. Johansson. Integration methods for acceleration signals. Master Thesis LiTH-ISY-EX-1676, 960125.
13. F. Jonsson. Dynamic performance and control methods for primary flight control with servopump. Master Thesis LiTH-ISY-EX-1661, 960918.
14. R. Karlsson. Optimal trajectory generation for Taurus. Master Thesis LiTH-ISY-EX-1714, 961216.
15. A. Malmberg. Track-to-track association in multisensor multitarget tracking. Master Thesis LiTH-ISY-EX-1705, 961129.
16. P. Norberg. Target Tracking with Decentralized Kalman Filters. Master Thesis LiTH-ISY-EX-1575, 960429.
17. P.-J. Nordlund and J. Sonesson. An environment for simulation of global positioning system (GPS) and a method for integration of inertial navigation system and GPS. Master Thesis LiTH-ISY-EX-1728, 961213.
18. M. Stern. Suggestion for an altitude meter for an autonomous miniature helicopter. Master Thesis LiTH-ISY-EX-1603, 960209.
19. Dan Rylander. Robust Control of a Hypersonic Vehicle using Gain Scheduling. Master Thesis LiTH-ISY-EX-1546.
20. Håkan Frank. Robust Control of a Hypersonic Vehicle Using Dynamic Inversion and mu-synthesis. Master Thesis LiTH-ISY-EX-1681.
21. Magnus Andersson. Graceful degradation of an integrated navigation system and improvement of an INS simulator. Master Thesis LiTH-ISY-EX-1834, 971217.
22. Mats Bergman. Filtering of radar measurements for aiming the automatic gun in JAS 39 Gripen. Master Thesis LiTH-ISY-EX-1826, 971121.
23. Johan Hedin. Specific emitter identification of radar. Master Thesis LiTH-ISY-EX-1875, 971202.
24. Lars Holmberger. Dimensioning of a real-time simulation environment for the BILL-sight. Master Thesis LiTH-ISY-EX-1810, 970930.
25. Thomas Jensen and Mathias Karlsson. Sensor management in a complex multisensor environment. Master Thesis LiTH-ISY-EX-1831, 971128.
26. Nicklas Johansson. H-infinity filtering. Master Thesis LiTH-ISY-EX-1791, 971216.
27. Tomas Larsson. Control law design for high alpha vehicle using linear quadratic optimization. Master Thesis LiTH-ISY-EX-1748, 970311.
28. Roine Pettersson. Position determination with GPS. Master Thesis LiTH-ISY-EX-1760, 970603.
29. Jan Wallenberg. Extending the TACSI tactical air combat simulator with Monte Carlo Simulation. Master Thesis LiTH-ISY-EX-1768, 970214.
30. Johannes Wintenby. Target tracking using IMM, JPDA and adaptive sensor updating. Master Thesis LiTH-ISY-EX-1747, 970418.
31. Håkan Wissman. Control stick steered aircraft simulation model. Master Thesis LiTH-ISY-EX-1812, 971215.
32. Daniel Murdin. Data fusion and fault detection in decentralized navigation systems. Master Thesis LiTH-ISY-EX-1920, 980512.
33. Charlotte Dahlgren. Non-linear black box modelling of JAS 39 Gripen's radar altimeter. Master Thesis LiTH-ISY-EX-1958, 981023.
34. Niklas Ferm. Identity fusion and classification of aircraft in a multisensor environment. Master Thesis LiTH-ISY-EX-1985, 981204.
35. Robert Guricke. Simplified models of sensor inaccuracy, data fusion and tracking in an air combat simulator. Master Thesis LiTH-ISY-EX-1903, 981124.
36. Ingela Lind. A design method for H-infinity optimal filtering. Master Thesis LiTH-ISY-EX-1945, 981005.
37. Martin Pettersson. Distributed integrity monitoring of differential GPS corrections. Master Thesis LiTH-ISY-EX-2021, 981210.
38. Mattias Svensson. Aircraft trajectory restoration by integration of inertial measurements and GPS. Master Thesis LiTH-ISY-EX-2021, 991026. SAAB.
39. Jan-Ole Jacobsen. Controller Design of a Generic Aeroplane Model using LFT Gain Scheduling. Master Thesis LiTH-ISY-EX-2059.
40. Stefan Ahlqvist. SAAB 1999.
41. Karolina Danielsson. Error modeling in navigation systems. SAAB 1999.
42. Björn Hässler. Transmission model for IRST. SAAB 1999.
43. Mats Hallingström. Modeling data traffic in onboard computer. SAAB 1999.
44. Josefin Hovlind. Measurement conditioning from camera surveyed weapon delivery. LiTH-ISY-EX-3071, SAAB 2000.
ISIS - Intercultural Strategies for International Success
Westover Air Reserve Base, Chicopee, Massachusetts
* Largest Air Reserve Base in the world.
* Operates the C-5A Galaxy, one of the largest military transport planes in the world; a single C-5A Galaxy generates substantial air pollution.
* Contamination problems include solvents, "general base refuse," jet fuels, degreasers, pesticides, herbicides, ashes from heat plant, oils.
* Waste disposal and landfill records poorly kept or unavailable.
* Vast stretches of land contaminated with Jet Fuel (JP-4) used for training and jet engine testing purposes.
* There have been over 20 sites in the Installation Restoration Program since 1994.
* Surface water contamination includes strong evidence of deicing compounds in Cooley Brook, which discharges into the Chicopee Reservoir, a popular recreational swimming spot.
* Groundwater pollution by contaminants leaching through the ground from landfills, waste disposal sites, and fire training areas. Many residents of the base who draw their water from wells are at risk of being exposed to contaminants in the groundwater.
* Efforts to monitor contaminants from a hazardous landfill are hampered by a poor understanding of the complex underground groundwater system. Westover consistently collects faulty and/or suspect data that does not support a conclusive understanding of this system. Westover's groundwater monitoring well database is unreliable: it is incomplete, contains errors, and is poorly organized.
Project Synopsis
This representation and analysis project is investigating the combination of model-integrated computing and aspect-oriented programming composition technologies to develop 1) a domain-specific, graphical language that captures the functional design of real-time embedded systems, 2) a weaving process that maps high-level invariant properties and system requirements to design constraints affecting specific program regions, and 3) a generation process that customizes components and composes real-time embedded systems. These technologies and tools are being demonstrated in the Boeing Bold Stroke OEP, where they are used to automatically initialize and configure avionics mission computing components that are customized for particular mission needs, and in the BBN UAV OEP, where the tools are used to synthesize CDL/QDL specifications, customizing the quality adaptive behavior of multi-media streaming applications. If successful, this project will help produce domain-specific and even application-specific modeling tools that will enable DoD systems engineers to configure, analyze, and validate complex real-time embedded systems in a more intuitive manner.
Project Description
The main goal of this project is to demonstrate the synergy of model-integrated computing (MIC) with the ideas from Aspect-Oriented Programming (AOP). In particular, the concepts of aspect-orientation will be applied at a higher level of abstraction - at the modeling level. An additional goal is to develop a framework for building weavers.
The motivation for the project stems from difficulties encountered when trying to specify the constraints for a model-based system. It became apparent that constraints cross-cut the model. A solution that isolates the constraints as a separate area of concern will improve the manageability of such models.
Challenges
Under a prior project, we created a development environment for specification, simulation, and synthesis of dynamic, real-time embedded systems. A system's requirements, algorithms, resources, and behavior are captured as a 'design space' representing a multitude of alternative system implementations. An integrated language allows specification of design constraints as components within a set of hierarchical, multiple-aspect models.
Figure 1
A drawback of our current approach can be seen in the sprinkling of many constraints throughout various levels of our model. It is difficult to maintain and reason about the effect and purpose of constraints when they are scattered about. Managing these distributed constraints becomes extremely difficult as system sizes increase (see Figure 1, where models are represented by the black circles and the replicated constraints are red blocks). Also, experimenting with different architectures, synchronization methods, network protocols, etc., becomes error-prone and labor intensive. We need a mechanism to separate out this concern. Our goal is to be able to specify global constraints at a particular level in the model and have that constraint automatically propagated down to the sub-objects (in this case a sub-object represents aggregation, not inheritance).
Approach
Handling Crosscutting Constraints
To solve these problems, we require a consistent, powerful way of specifying constraints and automatically weaving these into the target system designs. Aspect-oriented approaches can be applied to create system-wide constraint languages and methods for distributing and applying user-defined, embedded system, real-time constraints.
The manner in which our weaver is used is illustrated in Figure 2. The GME can export the contents of a model in the form of an XML document. In our former approach, the generated XML would be tangled with constraints throughout the document. Under our new approach, however, the exported XML may well be void of any constraints. The weaver is assigned the job of spreading the constraints throughout the model.
In the figure below, the input to the domain-specific weaver consists of the XML representation of the model, as well as a set of specification aspects provided by the modeler. Specification aspects are similar to pointcuts in AspectJ in the sense that they define specific locations in the model where a particular constraint is to be applied. The output of the weaving process is a new description of the model in XML. This enhanced model contains the new constraints, integrated throughout the model by the weaver. A toy sketch of this step is given after Figure 2.
Figure 2
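The following toy sketch mimics the weaving step just described: the model is exported as XML, a specification aspect names the nodes a constraint applies to, and the weaver writes the constraint into every matching node. The element and attribute names are invented for illustration; they are not the GME schema or our actual aspect language.

```python
# Toy constraint weaver over an XML model export.
import xml.etree.ElementTree as ET

model_xml = """
<model name="avionics">
  <component name="sensor" kind="realtime"/>
  <component name="logger" kind="besteffort"/>
</model>
"""

# specification aspect: where to apply, and what constraint to attach
aspect = {"match": lambda e: e.get("kind") == "realtime",
          "constraint": "wcet <= 5ms"}

def weave(xml_text, aspect):
    root = ET.fromstring(xml_text)
    for elem in root.iter("component"):
        if aspect["match"](elem):
            ET.SubElement(elem, "constraint").text = aspect["constraint"]
    return ET.tostring(root, encoding="unicode")

print(weave(model_xml, aspect))
```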
The requirements for our new approach necessitate a different type of weaver from those that others have constructed in the past. A new weaver is needed that allows propagation of constraints to sub-objects. As the weaver visits a particular node in the object graph, it must know how to propagate a constraint down to the node's sub-objects. Depending on the type of node, this propagation may be performed in different ways. To provide the weaver with the information needed to perform the propagation, a new type of aspect is needed. We call this a "strategy aspect." Strategy aspects are specified independently of any particular model and are joined to the object graph by the weaver.
The intent of a strategy aspect is to provide a hook that the weaver may call in order to process the node-specific constraint propagations. Strategy aspects must be woven prior to the weaving of constraint aspects. As the weaver visits each object node during constraint weaving, it may call upon the strategy aspect to aid in applying the constraint to that object. Thus, strategy aspects provide numerous ways of instrumenting object nodes in the object graph with constraints. An important design goal of this aspect language is to decouple the strategy aspects from the constraint aspects. For more information on the specifics of this approach, please see our Communications of the ACM paper below.
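As a hedged sketch of this idea, the snippet below dispatches on node type to decide how a constraint is pushed down to sub-objects (aggregation, not inheritance). The node types and propagation rules are illustrative assumptions only.

```python
# Strategy-aspect-style dispatch: per-node-type constraint propagation hooks.

def propagate_to_all(node, constraint):
    node["constraints"].append(constraint)
    for child in node["children"]:
        strategies.get(child["type"], propagate_to_all)(child, constraint)

def propagate_to_none(node, constraint):
    node["constraints"].append(constraint)       # attach here, do not descend

strategies = {"subsystem": propagate_to_all, "leaf": propagate_to_none}

model = {"type": "subsystem", "constraints": [], "children": [
    {"type": "leaf", "constraints": [], "children": []},
]}

propagate_to_all(model, "latency < 10ms")
print(model)
```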
Synthesizing Customized Components
We are also working on techniques to generate customized CORBA Component Model (CCM) components and containers. This generation will be driven by specific QoS properties that are provided from the higher-level specifications.
This work will provide a way to represent component functionality, along with system requirements, in a behavior/runtime-neutral way. The functional components will then be composed by generating the component wrappers with the behavior required to meet the QoS specifications.
Current ISIS Projects
Data Bases for Control, Modelling and Simulation and Intelligent sensory information systems
Embedded Realtime Databases for Engine Control Jörgen Hansson, Thomas Gustafsson
Diagnosis, Supervision and Safety
Diagnosis and Supervision for Vehicle Functions
Lars Nielsen, Erik Frisk, Marcus Klein, Lars Eriksson
Fault Isolation in Object Oriented Control Systems
Ulf Nilsson, Inger Klein, Dan Lawesson
Detection and Diagnosis in Control Systems
Lennart Ljung, Inger Klein, Fredrik Gustafsson, Anna Hagenblad, Mattias Krysander
Techniques for Developing Integrated Control and Information Systems
Resource Management in Wireless Communications Systems
Fredrik Gustafsson, Fredrik Gunnarsson, Erik Geijer Lundin, Frida Gunnarsson
Methods for Synthesis of Control and Supervision Functions
Supervision and Control of Industrial Robots
Svante Gunnarsson, Mikael Norrlöf, Erik Wernholt
Model Predictive Control for Systems Including Binary Variables
Anders Hansson, Torkel Glad, Daniel Axehill
Signal Processing in Integrated Control and Supervision Systems
Navigation Systems
Fredrik Gustafsson, Gustaf Hendeby
Signal Interpretation and Control in Combustion Engines
Lars Eriksson, Per Andersson
Sensor Fusion
Fredrik Gustafsson, Rickard Karlsson, Jonas Gillberg
ISIS project: Model Predictive Control for Systems Including Binary Variables
Background
Model Predictive Control (MPC) has proved to be a strong method to control large MIMO systems and has gained substantial interest in the industry, especially within the process and petrochemical fields. The main benefit is the possibility to handle constraints on various variables in the plant.
An extension to ordinary MPC is to also include the ability to use binary variables as control signals and as internal variables in the model description.
The central idea in MPC is to state the control problem as an optimization problem, and solve this optimization problem on-line repeatedly. When binary variables are used in MPC, the optimization problem to solve is changed from a Quadratic Program (QP) to a Mixed Integer Quadratic Program (MIQP), where the latter is known in general to be NP-hard.
The research is performed in collaboration with ABB Corporate Research.
As an example of linear MPC, let us consider a linear system
$$x_{k+1} = A x_{k} + B u_{k}$$
A standard MPC controller is typically defined as
$$
\begin{aligned}
u_k &= u_{k \mid k} \\
u_{(\cdot \mid k)} &= \arg\min_{u_{(\cdot \mid k)}} J_k \\
J_k &= \sum_{j=0}^{N-1} \| x_{k+j \mid k} \|_{Q}^{2} + \| u_{k+j \mid k} \|_{R}^{2} \\
u &\in \mathcal{U}, \quad x \in \mathcal{X}
\end{aligned}
$$
The optimization problem can be solved with quadratic programming if the control and state constraints are linear. This is most often the case, since amplitude and rate constraints are the most common constraints in practice.
Research Area
As mentioned above, when binary signals are used in the MPC problem, an NP-hard problem has to be solved at each time instant. Our research aims at finding and exploiting the structure in MIQP problems originating from MPC. The objective of this research is to speed up the solution of these optimization problems.
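For illustration only, the sketch below writes the MPC optimization above with cvxpy; declaring the input variable boolean turns the QP into an MIQP, which then requires a mixed-integer-capable solver. The system matrices, weights and horizon are toy assumptions, not related to the ABB application.

```python
# Toy MPC formulation; boolean input variable makes this an MIQP.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.eye(1)
N = 10                                   # prediction horizon
x0 = np.array([1.0, 0.0])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N), boolean=True)    # binary control signal -> MIQP

cost = 0
constraints = [x[:, 0] == x0]
for j in range(N):
    cost += cp.quad_form(x[:, j], Q) + cp.quad_form(u[:, j], R)
    constraints += [x[:, j + 1] == A @ x[:, j] + B @ u[:, j]]

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()                             # needs a mixed-integer solver, e.g. SCIP or CBC
print(u.value)
```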
3-Dimensional Engineered Tissue - Isis Project No 1088
Researchers at the Department of Engineering Science have developed a nutrient circulation and scaffold system for 3-dimensional bulky tissue culture. Engineering tissue involves the seeding of appropriate cells into a scaffold to form a bio-construct or matrix. The Oxford invention comprises systems of capillaries made of semi-permeable membranes whose pore size is sufficiently small to keep cells from leaving the system. The capillary network is embedded within the scaffold, made from biopolymers or synthetic polymers; cells attach to these scaffolds, are serviced by the capillaries, and grow to form tissue.
Figures 1 & 2 are SEM pictures of rat bone marrow fibroblastic cells grown in perfused hollow fibre bioreactors. Cell growth and tissue formation are significantly higher than the control without the membrane capillary perfusion.
Figure 1: Cells grow on collagen scaffold and produce new collagen fibrils
Figure 2: Cells do not grow directly on the hollow fibres and do not block the pores
The Oxford Invention
The invention employs biodegradable porous membrane capillaries to mimic the blood capillary network in natural tissue. No other engineered tissue currently employs a system of capillaries that deliver nutrients and remove metabolic waste deep inside the construct, so tissue growth is no longer governed by diffusion of nutrients from outside the scaffold. Biodegradation of the capillary membrane is a useful feature because, as time progresses, the pores widen, allowing more nutrients in and more waste out, so tissue of greater density can be grown. As the tissue becomes bulkier, epithelial cells can be introduced into the capillaries to promote blood vessel formation. This invention enables the culture of 3-dimensional tissues, opening the possibility of growing more complex structures (such as complete organs).
Interactive Quantitation of Movement and Motion in 4D Microscopy
Overview
This project is part of the research program '4D Imaging of Living Cells and Tissues'. Within this NWO-funded program, this project covers the analysis and quantitation of movement and motion in sequences of 3D confocal microscopy images.
For motion analysis, several techniques can be used to extract information from image sequences. The development of these techniques is geared towards the efficient computation of motion quantities from large datasets.
Techniques for motion quantitation include, for example, computation of optical flow and tracking of moving particles. Further research interests are the quantitation and description of changes in object shape over time.
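As a minimal sketch of one such technique, the snippet below computes dense optical flow between two frames with OpenCV's Farneback algorithm, assuming 2-D grayscale frames; the file names are illustrative, and real 3D confocal sequences would need slice-wise or volumetric handling.

```python
# Dense optical flow between two grayscale frames (illustrative file names).
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# args: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean displacement per pixel:", float(np.mean(magnitude)))
```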
Sensemaker
SenseMaker - a multi-sensory, task-specific adaptable perception system
(IST-2001-34712)
Funded by the European Commission under the Life-like Perception Programme.
SenseMaker Demonstration site
Read more on the Sensemaker project.
Introduction
In contrast to the majority of artificial sensor machines, animal or plant organisms have available a vast array of modalities for interaction with their environment. Perception in living organisms is thus a complex process going beyond the simple detection and measure of sensory stimuli. It is dependent on integration of sensory information entering through a number of different channels and, in higher animals, is subject to modulation by higher cognitive processes acting at the cortical level, or via descending brain pathways to early stages of the sensory processing chain.
Internal representations of the sensory world are formed through unification of information from different sources. Interaction of information arriving through different sensory pathways may be complementary for object identification. For example, auditory information - e.g. the "moo" of a cow - may help to identify the visual entity - i.e. the shape of the cow. Perception is not a fixed concept: it is significantly modulated by the internal state of the organism and by many contextual factors such as past experience, internal prediction, association, or on-going motor behaviour. Repetitive experience of a particular sensory-motor context may lead to functional habituation, based on central prediction of the sensory world. In contrast, active goal-directed exploration of sensory space may focus attention on a particular sensory modality, on a particular region of body space, or on a particular temporal sequence of motor command and re-afferent sensory input. The importance attributed to each sensory modality in constructing this integrated representation of the sensory world also depends on its working range: for instance, vision is often the preferred sense when light conditions are good, but touch, hearing, or smell, or a combination of all three, may serve better for object recognition in the dark.
The range of sensory modalities available to living organisms is large. In order to deal with different sorts of environmental milieux, nocturnal or aquatic animals have developed a variety of senses, which higher animals, such as humans, do not possess. These include echolocation used by bats, the electric organ combined with active electric senses used by many tropical freshwater fish, the magnetic sense of certain moles and birds, infrared sensors used by certain snakes and beetles, lateral line mechanoreceptive or hydrodynamic sensory systems used by fish and some marine mammals, and sonar and ultrasound also used by aquatic mammals. Humans themselves have developed certain artificial sensors, which detect physical or chemical signals that are not perceived directly by living organisms, but which if made available in a biologically compatible manner might also extend the human sensory range. These include, for example, sensors to detect chemical pollutants, x-rays, radioactivity, electromagnetic fields or cosmic radiation.
The SenseMaker Project
The project has two principal aims. The first is to combine biological, physical and engineering approaches in the production of a multi-sensory, task-specific adaptable perception system. The second is to push forward knowledge of natural systems and to find the links between what we consider to be biological principles and the mathematics that humans have used so effectively in the construction of intelligent machines.
The first aim of the project is thus to conceive and implement electronic architectures that embody the features of living perceptual systems reviewed above and are able to merge sensory information obtained through different sensory modalities, into a unified perceptual representation of the environment. The architectural design of the SenseMaker machine will be based on biological principles of sensory receptor and nervous system function, inspired by experimental studies of several different sensory modalities. The system will include higher cognitive levels modelled on psychophysical research paradigms, whose function will be to implement dynamic rules of cross-modal integration, activity and time-dependent algorithms for internal prediction, goal directed attention, and transitions between dominant or convergent sensory modalities according to changing environmental parameters.
As in living systems, we will seek to create a representation of the proximal environmental space, which is largely independent of the sensory substrates. The electronic architecture will have the capacity for auto-reconfiguration, forming supplementary cross-connections between the sensory receptor level of a given modality and the higher stages of processing specific to another sensory modality. The ultimate ambition is thus to generate the capacity to create entirely new senses based on hybrid system design.
The Partners
The project partners are a multi-disciplinary team of biologists, neuroscientists, engineers and computer scientists:
No. / Acronym / Institution / Group / Researchers
1. UU - University of Ulster, UK (co-ordinator); Intelligent Systems Engineering Laboratory, Faculty of Informatics; Prof. TM McGinnity, Dr LP Maguire
2. UNIC - Centre National de la Recherche Scientifique, France; The Integrative and Computational Neuroscience Research Unit; Dr Y Fregnac, Dr K Grant, Dr A Destexhe, Dr J Lorenceau, Dr T Bal, Dr D Shulz
3. UHEI - Ruprecht-Karls-Universitaet Heidelberg, Germany; Electronic Vision Group, Kirchhoff-Institut fur Physik; Prof. Dr K Meier, Dr J Schemmel
4. TCD - Trinity College, Ireland; Visual Cognition Group, Institute for Neuroscience, Department of Psychology; Dr F Newell
5. IXL - ENSEIRB-CNRS Universite Bordeaux, France; IXL Laboratory, School of Electronics; Dr Sylvie Renaud-Le Masson
SenseMaker Partners' Websites
* Intelligent Systems Engineering Laboratory(ISEL), University of Ulster, Magee College, UK.
* Unité de Neurosciences Intégratives et Computationnelles (UNIC -CNRS).
* Electronic Vision(s) Group, at Ruprecht-Karls-Universitaet Heidelberg University, Germany.
* Psychology, Trinity College, Dublin, Ireland.
* ENSEIRB-CNRS Universite Bordeaux(IXL Laboratory), France.
Cognitive Computer Vision
Understanding human vision is an intriguing challenge. Vision dominates our senses for personal observation, societal interaction, and cognitive skill acquisition. At the Intelligent Sensory Information Systems group of the University of Amsterdam, we have recently started a new line of research into computational models for visual cognition.
Cognitive vision is the processing of visual sensory information in order to act and react in a dynamic environment. The human visual system is an example of a very well-adapted cognitive system, shaped by millions of years of evolution. Since vision requires 30% of our brain capacity, and what is known about it points to it being a highly distributed task interwoven with many other modules, it is clear that modelling human vision, let alone understanding it, is still a long way off. It is also clear that there is a close link between vision and our expressions of consciousness, but statements such as this add to the mystery rather than resolving it.
Understanding visual perception to such a level of detail that a machine could be designed to mimic it is a long-term goal, and one which is unlikely to be achieved within the next few decades. However, as computers are expected in the next twenty years to reach the capacity of the human brain, now is the time to start thinking about methods of constructing modules for cognitive vision systems.
For both biological and technical systems, we are examining which architectural components are necessary in such systems, and how experience can be acquired and used to steer perceptual interpretation. Since human perception has evolved to interpret the structure of the world around us, a necessary boundary condition of the vision system must be the common statistics of natural images.
Receptive Fields
Neurobiological studies have found a dozen or so different types of receptive fields in the visual system of primates. As the receptive fields have evolved to capture the world around us, they are likely to be dual to our physical surroundings. These fields must be derived from the statistical structures that are probed in visual data by integration over spatial area, spectral bandwidth and time. In our cognitive vision research, we have initially derived several receptive field assemblies, each characterising a physical quantity from the visual stimulus.
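A toy sketch of what such receptive-field-like measurements might look like computationally is given below: Gaussian and Gaussian-derivative filters applied to an image, integrating over a spatial area set by the scale parameter. This is a generic scale-space illustration under our own assumptions, not the group's actual receptive field model.

```python
# Toy receptive-field assembly: Gaussian (derivative) filters at several scales.
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.rand(128, 128)          # stand-in for a visual stimulus

def receptive_field_responses(img, sigma):
    smoothed = gaussian_filter(img, sigma)               # zeroth-order field
    dx = gaussian_filter(img, sigma, order=(0, 1))       # first-order derivative in x
    dy = gaussian_filter(img, sigma, order=(1, 0))       # first-order derivative in y
    return smoothed, dx, dy

# a small assembly of fields at different spatial scales
responses = {s: receptive_field_responses(image, s) for s in (1.0, 2.0, 4.0)}
print({s: r[1].std() for s, r in responses.items()})
```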
As the visual stimulus involves a very reductive projection of the physical world onto a limited set of visual measurements, only correlates of relevant entities can be measured directly. Invariants transform visual measurements into true physical quantities, thereby removing those degrees of freedom not relevant for the observer. Hence, a first source of knowledge involved in visual interpretation is the incorporation of physical laws. In the recent past, we have used colour invariance as a well-founded principle to separate colour into its correlates of material reflection: illuminant colour, highlights, shadows, shading components and the true object reflectance. Such invariants allow a system to be sensitive to obstacles, while at the same time being insensitive to shadows. The representation of the visual input as a plurality of invariant representations is a necessary information-reduction stage in any cognitive vision system.
To limit the enormous computational burden arising from the complex task of interpretation and learning, any efficient general vision system will ignore the common statistics in its input signals. Hence, the apparent occurrence of invariant representations decides what is salient and therefore requires attention. Such focal attention is a necessary selection mechanism in any cognitive vision system, critically reducing both the processing requirements and the complexity of the visual learning space, and effectively limiting the interpretation task. Expectation about the scene is then inevitably used to steer attention selection. Hence, focal attention is not only triggered by visual stimuli, but is also affected by knowledge about the scene, initiating conscious behaviour. In this principled way, knowledge and expectation may be included at an early stage in cognitive vision. In the near future, we intend to study the detailed mechanisms behind such focal attention.
Driver station : CARDS2 generic cockpit, fully instrumented with CAN multiplexed data acquisition
Force feedback :
DAE TRW steering wheel actuator and active gas pedal,
Passive actuators for gas and clutch pedals, gearbox lever and hand brake
Motion platform : 6-axes electromechanical from Hydraudyne
Image generation : Linux PC with latest-generation graphics boards
Projection : Barco Sim4 and LCD projectors
Software : SCANeR© II, developed by the Technical Center for Simulation
Multiagent Systems
In our group we carry out research on several aspects of (mainly) cooperative Multiagent Systems. We are especially interested in multiagent systems that are embedded in realistic real-world situations (for instance soccer playing robots), and have to successfully deal with motion noise, sensor uncertainty, and time constraints.
Current projects:
* Multiagent coordination
* Multiagent fusion systems
* Multiagent decision making under uncertainty
* Hierarchical awareness in distributed agents
Model-based detection/tracking of humans
The ability to recognize humans and their activities by vision is key for a machine to interact intelligently and effortlessly with a human-inhabited environment. There are numerous important applications, ranging from public safety, elderly care and intelligent vehicles to human motion capture and analysis. In this research we investigate generic techniques for person detection and tracking. Challenges abound: what features to use (2D vs. 3D, model-based vs. model-free), how to efficiently (re)initialize the model, how to adapt a generic model to particular image data, how to deal with (self-)occlusion and uncertainty, etc.
Vacancy (Ph.D. Student)
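For context, the following hedged sketch runs a common baseline person detector (OpenCV's HOG plus linear SVM pedestrian detector) on a single frame; it illustrates only the detection step, not the model-based tracking investigated in this project, and the file names are illustrative.

```python
# Baseline pedestrian detection with OpenCV's default HOG people detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street_scene.jpg")                      # illustrative input
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
```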
Multi-Camera Human Tracking
* Past research has dealt with tracking humans across multiple cameras in the context of wide-area, distributed multi-camera surveillance.