The popularity of open source software development over the last decade has brought about increased interest from industry in how to use open source components, participate in open source communities, build business models around this type of software development, and learn more about open source development methodologies. There are now large communities working with open source software, e.g. in the mobile industry, but also in other industry sectors.
Open source software is of interest in software product development for several reasons. This type of software can for example be included in products as components, it can be used in development, and there is a possibility to transform proprietary software into open source in order to build a community around it.
We will present the results of a systematic literature review of research articles on open source in commercial organizations, which found that open source is investigated in the following areas: company participation in open source development communities, business models based on open source, treating open source software as components in component-based development, and how open source processes can be used within a company. We will also present the results of a research initiative started to investigate what happens when a product is transformed from proprietary to open source, and we will discuss how an open source process can be used within an organization without actually making the product open source.
Finding the right balance between software quality as experienced by users and the investment effort in software development is critical for staying competitive. In addition, complexity is continuously increasing, making the scoping decisions for coming releases even more difficult. This seminar presents research results from the EASE project regarding tested methods for supporting the scoping of quality requirements and for visualizing the scoping process in a large-scale setting.
Inspections are still the most efficient way of finding and correcting faults during the development of software. Different inspection methods have been developed since M. Fagan proposed the method for use in the software industry in 1976. Two basic processes are needed during an inspection: (1) comprehension and (2) fault searching. This seminar gives an overview of the inspection area and describes different methods used for inspection purposes, for example scenario-based reading. Furthermore, some discussion of cognitive factors in inspections and of how to estimate fault content from inspection data is provided.
Inspections have long been used as an efficient method to improve software product quality early in the software life cycle. However, a problem with inspections is that you do not know the actual level of quality achieved after the inspection. The attribute that affects the quality is how many faults are left, not how many have been found. Capture-recapture is a statistical method that can be used to estimate the number of remaining faults after an inspection, and thereby to get an understanding of the inspected material's level of quality. This seminar introduces the capture-recapture method. It describes the theory and the history, but especially how the concept of capture-recapture can be used within a company to increase the knowledge of product quality throughout the development.
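The simplest variant of the idea can be illustrated with the two-inspector (Lincoln-Petersen) estimator; the scenario and numbers below are hypothetical, and real capture-recapture studies use more robust estimators:

```java
// Illustrative two-inspector capture-recapture (Lincoln-Petersen) estimate.
// Names and numbers are hypothetical, chosen only to show the idea.
public class CaptureRecapture {

    // Estimated total number of faults: N ~ n1 * n2 / n12,
    // where n1, n2 are faults found by each inspector and n12 the overlap.
    static int estimateTotalFaults(int n1, int n2, int overlap) {
        if (overlap == 0) {
            throw new IllegalArgumentException("estimator undefined with no overlap");
        }
        return (n1 * n2) / overlap;
    }

    // Estimated faults remaining after the inspection.
    static int estimateRemainingFaults(int n1, int n2, int overlap) {
        int found = n1 + n2 - overlap;   // distinct faults actually found
        return estimateTotalFaults(n1, n2, overlap) - found;
    }

    public static void main(String[] args) {
        // Inspector A found 20 faults, inspector B found 15, 10 found by both.
        System.out.println("estimated total: " + estimateTotalFaults(20, 15, 10));       // 30
        System.out.println("estimated remaining: " + estimateRemainingFaults(20, 15, 10)); // 5
    }
}
```

The intuition: the more overlap between the two inspectors, the fewer faults are likely to remain undetected.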
The seminar presents the emerging field of integrated control and CPU-time scheduling, where more general scheduling models and methods that better suit the needs of control systems are developed. This creates possibilities for dynamic and flexible integrated control and scheduling frameworks, where the control design methodology takes the availability of computing resources into account during design and allows on-line trade-offs between control performance and computing resource utilization. Control theory is used to compensate for the nondeterminism that is caused by task preemption and/or the computing platform. Control theory is also used to increase flexibility and allow increased resource utilization while maintaining acceptable control loop performance.
Software testing takes a substantial share of software project costs. Hence it is important to utilize test time and resources as efficiently as possible. One important means for efficiency is to be systematic. This seminar addresses issues concerning test purposes: why is a specific test conducted? Is the purpose to find faults or to demonstrate performance? Based on the purpose, different strategies can be applied: should we cover some aspect of the code, or should we use some statistical sampling technique? Depending on the strategy chosen, specific testing techniques can be applied, e.g. equivalence partitioning or operational profile testing. An overview of different test strategies and test techniques is given in this seminar. The goal is to give the audience an insight into how test resources can be utilized through a more systematic approach, and to provide some ideas on how to start improving a test process.
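As a small illustration of equivalence partitioning, consider a hypothetical validation function: its input space is divided into classes expected to behave alike, and one representative value per class is tested instead of every input:

```java
public class EquivalencePartitioningDemo {
    // Hypothetical function under test: accepts ages in the range 0..120.
    static boolean isValidAge(int age) {
        return age >= 0 && age <= 120;
    }

    public static void main(String[] args) {
        // One representative value per equivalence class instead of all inputs:
        int invalidLow = -5;    // class 1: age < 0          -> invalid
        int valid = 35;         // class 2: 0 <= age <= 120  -> valid
        int invalidHigh = 200;  // class 3: age > 120        -> invalid
        System.out.println(isValidAge(invalidLow));   // false
        System.out.println(isValidAge(valid));        // true
        System.out.println(isValidAge(invalidHigh));  // false
    }
}
```

Three test cases thus stand in for the whole integer input space; boundary-value analysis would additionally test values such as 0, 120, and 121.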
The requirements specification is the document containing the customer's expressed expectations on a system to be developed. Software testing aims at verifying that the requirements are fulfilled by an implementation. During test planning, test specifications are written to define what shall be tested and to specify the expected results. However, a test specification contains much the same information as a requirements specification. A new approach to specifying both requirements and tests reduces the duplicated work across the two types of specifications. A combined specification and test model is built: either the requirements model is transformed into a test model, or it is extended with the additional information required for the test model. The seminar presents the basic principles of the approach and shows the outcome of a case study to illustrate the method.
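One way to picture the idea of sharing information between the two kinds of specifications, simplified well beyond the model-based approach of the seminar, is to state a requirement as an executable predicate and reuse it directly as the test oracle (all names and the requirement itself are hypothetical):

```java
public class SharedSpecDemo {
    // Hypothetical requirement R1: the discount is 10% for orders of
    // 100 units or more, otherwise 0%. Written once, as a predicate,
    // and reused as the test oracle.
    static boolean requirementR1(int orderSize, int discountPercent) {
        return orderSize >= 100 ? discountPercent == 10 : discountPercent == 0;
    }

    // Implementation under test.
    static int discountFor(int orderSize) {
        return orderSize >= 100 ? 10 : 0;
    }

    public static void main(String[] args) {
        // The test derives its expected results from the requirement itself,
        // so the expectation is not written down twice.
        int[] samples = {1, 99, 100, 500};
        for (int size : samples) {
            System.out.println(size + ": " + requirementR1(size, discountFor(size)));
        }
    }
}
```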
Improving software development processes means that processes are changed. It is therefore important to evaluate the changes in order to see that they result in the anticipated effects. Often, changes are made in order to improve development time, development cost, or the quality of delivered products. Even if changes are made with the objective of improving one of these aspects, the other aspects should not be affected negatively. There is always a risk associated with changing a process, so it is important to evaluate a change proposal as early as possible in the change process. In the seminar, two ways of analyzing changes are presented. One method is based on interviews with people in the organization. The other is based on letting a limited number of people in the organization try out the changed process in a controlled environment.
XP is a programming process for problems with changing requirements, as described in the recent book "Extreme Programming Explained: Embrace Change" by Kent Beck. XP aims at keeping the software simple and of high quality by making use of a number of programming practices that have both short-term and long-term benefits. Among the practices are: tests are written before implementation; programmers program in pairs; design is done throughout development by refactoring code; frequent releases allow the project to focus on the currently most important requirements. The talk gives an overview of the XP practices and relates them to similar processes such as participatory design.
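The test-first practice can be sketched as follows; the example and all names are hypothetical, with the test written down before the code it exercises:

```java
public class TestFirstDemo {
    // Step 1 (written first): the test states the expected behaviour of a
    // yet-unwritten rounding helper.
    static void testRoundToNearestTen() {
        assert roundToNearestTen(14) == 10;
        assert roundToNearestTen(15) == 20;
        assert roundToNearestTen(20) == 20;
    }

    // Step 2: the simplest implementation that makes the test pass.
    static int roundToNearestTen(int n) {
        return ((n + 5) / 10) * 10;
    }

    public static void main(String[] args) {
        testRoundToNearestTen();   // run with java -ea to enable assertions
        System.out.println("all tests pass");
    }
}
```

The test doubles as an executable specification: when requirements change, the test is updated first and the code refactored until it passes again.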
Object-oriented design patterns are abstract solutions to commonly occurring design problems. The seminal book "Design Patterns: Elements of Reusable Object-Oriented Software" by Gamma, Helm, Johnson, and Vlissides provides a catalog of 23 design patterns. These patterns support flexibility in the design, and are often used in object-oriented application frameworks. This talk gives a background to design patterns and their use in frameworks and presents a few design patterns in detail.
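The abstract does not say which patterns are presented in detail; as one concrete example from the GoF catalog, the Strategy pattern encapsulates an algorithm behind an interface so that it can be exchanged at run time:

```java
import java.util.Arrays;

public class StrategyDemo {
    // Strategy: the sorting policy is an interchangeable object.
    interface SortStrategy {
        void sort(int[] data);
    }

    static class AscendingSort implements SortStrategy {
        public void sort(int[] data) { Arrays.sort(data); }
    }

    static class DescendingSort implements SortStrategy {
        public void sort(int[] data) {
            Arrays.sort(data);
            // Reverse in place to obtain descending order.
            for (int i = 0, j = data.length - 1; i < j; i++, j--) {
                int t = data[i]; data[i] = data[j]; data[j] = t;
            }
        }
    }

    // The context delegates to whichever strategy it is given,
    // without knowing the concrete algorithm.
    static int[] sortedWith(SortStrategy strategy, int[] data) {
        int[] copy = data.clone();
        strategy.sort(copy);
        return copy;
    }

    public static void main(String[] args) {
        int[] data = {3, 1, 2};
        System.out.println(Arrays.toString(sortedWith(new AscendingSort(), data)));
        System.out.println(Arrays.toString(sortedWith(new DescendingSort(), data)));
    }
}
```

This is exactly the kind of flexibility frameworks rely on: the framework owns the context, while the application plugs in its own strategy objects.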
The key language constructs of classes, subclassing, and virtual methods constitute the common denominator of OO programming languages. In this talk we discuss generalized language mechanisms and show how they can benefit the construction of safe frameworks. We discuss four mechanisms: generalized block structure, generalized inheritance, generalized virtuality, and singular objects. All of these mechanisms are available in the BETA programming language, and some of them (inner classes and anonymous objects) have been adopted by Java. Examples are given in a Java-like language. The talk is based on an article with the same title by Görel Hedin and Jörgen Lindskov Knudsen that appears in the book "Implementing Application Frameworks. Object-Oriented Frameworks at Work", Wiley, 1999.
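The two Java adoptions mentioned, inner classes and anonymous objects, can be sketched as follows (a simplified illustration of the idea, not an example taken from the article):

```java
public class BlockStructureDemo {
    int count = 0;

    // An inner class has access to the state of its enclosing object,
    // giving a restricted form of generalized block structure.
    class Counter {
        void increment() { count++; }
    }

    // An anonymous class instance plays the role of a singular object:
    // a one-off object with no named class of its own.
    Runnable singular = new Runnable() {
        public void run() { count += 10; }
    };

    public static void main(String[] args) {
        BlockStructureDemo outer = new BlockStructureDemo();
        BlockStructureDemo.Counter c = outer.new Counter();  // tied to 'outer'
        c.increment();
        outer.singular.run();
        System.out.println(outer.count);   // 11
    }
}
```

In BETA these mechanisms are fully general; Java's versions are more limited but already useful for framework construction.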
Domain-specific languages provide programming support that leads to simpler and safer programs, which is highly desirable for programming embedded systems and manufacturing equipment. However, language design is non-trivial, and the required development effort is often too costly for industrial projects. To cope with this problem, we have implemented a highly interactive tool, APPLAB, for developing and experimenting with domain-specific languages, and used it in the development of a programming environment for industrial robots. APPLAB supports the development of domain-specific languages by providing a high-level specification formalism (reference attributed grammars) and integrated language-based support for editing the syntax, the semantics, and the application program. The tool was connected to an industrial robot and confronted with programming topics encountered in typical industrial applications. Experiences from our full-scale prototype, including the development of a robot programming language, show that interactive language development is very beneficial for the programming of embedded systems. The talk is based on an article with the same title by Elizabeth Bjarnason, Görel Hedin, and Klas Nilsson that appears in Nordic Journal of Computing 6 (1999), 36-55.
Realtime systems are growing in complexity and size, and many facilities of object orientation would be valuable to have available here as well. Up to now, OO has, however, rarely been used in connection with realtime systems, and in particular not in combination with hard deadlines and small systems, due to technical restrictions. A current trend, however, is to look at the language Java for realtime applications. Many of the advantages of OO are present in this language: type safety, a secure implementation, and constructs that scale well; it also has defined mechanisms for concurrency. There are, however, still many technical problems that need to be solved. The talk gives an overview of Java and its support for realtime applications and then concentrates on the technical problems of using Java for systems with deadlines. Much of this part of the presentation focuses on a virtual machine for Java, IVM, which is used to study these problems and the solutions tried there.
Large companies and organizations have long had access to global networks. Groups of developers are now able to work all over the world on the development of the same system. The potential is considerable due to the increased possibility of using personnel and competence from different locations. However, the way in which the work is divided, and the handling of the interactions between different groups and individuals, are largely affected by the fact that the staff is geographically dispersed. From different locations they may need to modify thousands of different files, and sometimes the same files, within a single product. This creates new demands on the tools and systems used for coordinating the development. Many of these demands, but not all, fall within the area of Configuration Management. In this seminar we demonstrate, using a number of examples, different situations in which distributed development may arise in a company. We classify some cases and highlight their specific characteristics. We also describe different architectures, work processes, and CM tools, and in what way they support the distribution in each of the various cases.
Object-oriented technologies are starting to make their way into the design and implementation of embedded systems and other types of real-time systems. An example of this is the recent exploding interest in real-time Java. In order to properly utilise the power of this new technology, adequate programming languages are required. Automatic memory management, also known as garbage collection, is an important component of a modern safe object-oriented programming language such as Java. Automatic memory management has traditionally been considered infeasible in hard real-time systems for reasons of unpredictability and inefficiency, a view that no longer holds thanks to the technology advancements of recent years. The state of the art within automatic memory management for hard real-time systems is presented, and the feasibility of using Java in embedded systems is discussed.
Current industry practice is to build embedded systems which contain both hardware and software solutions. The design time of such systems needs to be shortened in order to catch the market window for a product. This requirement forces designers to use new design methodologies which improve productivity. The current trend is to raise the design abstraction level and make the most important decisions there, while using automatic design tools for the rest of the design process. One methodology which is growing very quickly is hardware/software co-design. This methodology makes it possible to specify a system in a unified way regardless of its final implementation. The enabling technology behind this approach is high-level synthesis, which makes it possible to synthesize hardware from descriptions on the same level as ordinary programs. Using this approach, we are able to specify a system using the C/C++ or VHDL language and then, by evaluating different system implementation possibilities, select one which can later be automatically or semi-automatically implemented. This seminar will introduce the basic concepts of hardware/software co-design and present the basic enabling technologies, such as high-level synthesis, system-level synthesis, hardware/software partitioning, and communication synthesis. Current developments in the area of system specification language standardization (e.g., SystemC) will also be discussed.
Constraint programming is a new programming paradigm which makes it possible to specify relationships, or constraints, among programmer-defined entities. The constraint programming system automatically maintains these constraints and ensures their satisfiability at any stage of the computation. There exist constraint programming systems over different domains which can be used in different application areas. For example, constraints over finite domains are used for solving many combinatorial optimization problems, such as planning, job shop scheduling, and timetabling, where they compete successfully with Integer Linear Programming approaches. Other constraint systems are defined for real and rational numbers, interval arithmetic, or sets. These systems can be used to quickly solve many practical problems, and are in commercial use in air traffic planning, health care, banking, and the automotive industry, for example. In this seminar, we will give a short review of the basic principles of constraint programming and show possible application areas. A demo that uses a Constraint Logic Programming system for solving allocation and scheduling problems with different heterogeneous constraints will also be presented; it will mainly focus on applications of constraint programming in electronic design automation.
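To convey the flavor of finite-domain constraints without a real constraint system, the following sketch enumerates a small made-up problem by brute force; an actual CP system would instead prune the variable domains through constraint propagation:

```java
import java.util.ArrayList;
import java.util.List;

public class FiniteDomainDemo {
    // Toy finite-domain problem (hypothetical): find x, y, z in 1..5
    // with x + y + z = 10 and x < y < z. Here the constraints are
    // checked by exhaustive search; a constraint solver would use
    // propagation to shrink the domains before searching.
    static List<int[]> solve() {
        List<int[]> solutions = new ArrayList<>();
        for (int x = 1; x <= 5; x++)
            for (int y = 1; y <= 5; y++)
                for (int z = 1; z <= 5; z++)
                    if (x + y + z == 10 && x < y && y < z)   // the constraints
                        solutions.add(new int[]{x, y, z});
        return solutions;
    }

    public static void main(String[] args) {
        for (int[] s : solve())
            System.out.println(s[0] + " " + s[1] + " " + s[2]);
        // Two solutions: (1, 4, 5) and (2, 3, 5).
    }
}
```

Real problems of this kind, such as the scheduling and allocation tasks mentioned above, have search spaces far too large for enumeration, which is exactly where constraint propagation pays off.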