June 4th, Wednesday, 4:00 pm - 7:00 pm
Room: G.04 - G.06
COASTmed: Software Architectures for Delivering Customizable, Policy-Based Differential Web Services
Alegria Baquero
(University of California at Irvine, USA)
Inter-organizational exchange of personal information raises significant challenges in domains such as healthcare. First, trust among parties is not homogeneous; data is shared according to complex relations. Second, personal data is used for unexpected, often divergent purposes. This tension between information need and provision calls for custom services whose access depends on specific trust and legal ties. Current Web services are "one-size-fits-all" solutions that neither capture nuanced relations nor meet all users' needs. Our goal is to provide computation-enabled services which (a) are accessible based on providers' policies, and (b) allow user-controlled customization within the authority granted. We present our proposed solutions in COASTmed, a prototype for electronic health record (EHR) management which leverages novel architectural principles and formal policies.
Formal Verification Problems in a Big Data World: Towards a Mighty Synergy
Matteo Camilli
(University of Milan, Italy)
Formal verification requires high-performance data processing software to extract knowledge from the unprecedented amount of data coming from analyzed systems. Since cloud-based computing resources have become easily accessible, there is an opportunity for verification techniques and tools to undergo a deep technological transition and exploit the newly available architectures, which has created increasing interest in parallelizing and distributing verification techniques. In this paper we introduce a distributed approach that exploits techniques typically used by the big data community, together with cloud computing facilities, to enable verification of very complex systems.
Cross-Platform Testing and Maintenance of Web and Mobile Applications
Shauvik Roy Choudhary
(Georgia Tech, USA)
Modern software applications are expected to run on a variety of web and mobile platforms with diverse software- and hardware-level features. Thus, developers of such software need to duplicate their testing and maintenance effort across a wide range of platforms, a demand they often cannot cope with. As a result, they release software that is broken on certain platforms, affecting the customers who use those platforms. The goal of my work is to improve the testing and maintenance of cross-platform applications by developing automated techniques for matching such applications across different platforms.
ReuseSEEM: An Approach to Support the Definition, Modeling, and Analysis of Software Ecosystems
Rodrigo Pereira dos Santos
(COPPE, Brazil; Federal University of Rio de Janeiro, Brazil)
The Software Engineering (SE) community has identified economic and social issues as a challenge for the coming years. Companies and organizations have directly (or indirectly) opened up their software platforms and assets to others, including partners and third-party developers, creating software ecosystems (SECOs). This scenario changes the traditional software industry because it requires mature SE research that deals with an environment where business models and socio-technical networks can impact systems engineering, management, and reuse approaches. However, one strong inhibitor is the complexity of defining and modeling SECO elements to improve their comprehension and analysis, mainly because the topic is emerging and no consensus on its concepts and relations exists yet; it is therefore difficult to understand its real impact on the SE industry. In this context, we propose an approach to support the definition, modeling, and analysis of SECOs by exploring Software Reuse concepts and techniques and by treating non-technical aspects in SE.
Summarization of Complex Software Artifacts
Laura Moreno
(Wayne State University, USA)
Program understanding is necessary for most software engineering tasks. Internal and external documentation help during this process; unfortunately, this documentation is often missing or outdated. One way to address this situation is to automatically summarize software artifacts. In the case of source code, a few approaches have been proposed to generate natural language descriptions of fine-grained elements of the code. This research focuses on the automatic generation of generic natural language summaries of complex code artifacts, such as classes and change sets. In addition, these generic summaries will be adapted to support specific maintenance tasks.
Nirikshan: Process Mining Software Repositories to Identify Inefficiencies, Imperfections, and Enhance Existing Process Capabilities
Monika Gupta
(IIIT Delhi, India)
Process mining extracts knowledge about business processes from data stored, implicitly in an ad-hoc way or explicitly, by information systems. The aim is to discover the runtime process, analyze performance, and perform conformance verification, using process mining tools like ProM and Disco, both for a single software repository and for processes spanning multiple repositories. Applying process mining to software repositories has recently gained interest due to the vast data generated during software development and maintenance. The process data embodied in these repositories can be analyzed to improve the efficiency and capability of the process; however, doing so involves many challenges that have not been addressed so far. Project teams define workflows, design processes, and policies for tasks such as issue tracking (defects or feature enhancements) and peer code review (reviewing a submitted patch to catch defects before they are injected) to streamline and structure their activities. Because of imperfections, reality may not match what is defined, so the extent of non-conformance needs to be measured. We propose a research framework, `Nirikshan', to process mine the data of software repositories from multiple perspectives: process, organizational, data, and time. We apply process mining to software repositories to derive the runtime process map, identify and remove inefficiencies and imperfections, extend the capabilities of existing software engineering tools to make them more process-aware, and understand interaction patterns between contributors to improve project efficiency.
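The core of process discovery as described above can be illustrated with a minimal sketch: given an event log of (case, activity) pairs, count how often one activity directly follows another, which yields the edges of a runtime process map and exposes deviations from the defined workflow. The issue-tracking activities below are hypothetical, not taken from the Nirikshan framework itself.

```python
from collections import Counter, defaultdict

# Toy event log: (case_id, activity) pairs, already ordered by timestamp.
# The workflow states are illustrative assumptions.
log = [
    ("ISSUE-1", "open"), ("ISSUE-1", "triage"), ("ISSUE-1", "fix"), ("ISSUE-1", "close"),
    ("ISSUE-2", "open"), ("ISSUE-2", "fix"), ("ISSUE-2", "close"),   # skips triage
    ("ISSUE-3", "open"), ("ISSUE-3", "triage"), ("ISSUE-3", "fix"), ("ISSUE-3", "close"),
]

def directly_follows(log):
    """Count how often activity b directly follows activity a within each case."""
    traces = defaultdict(list)
    for case, act in log:
        traces[case].append(act)
    edges = Counter()
    for acts in traces.values():
        for a, b in zip(acts, acts[1:]):
            edges[(a, b)] += 1
    return edges

edges = directly_follows(log)
print(edges[("open", "triage")])  # 2 cases conform to the defined workflow
print(edges[("open", "fix")])     # 1 case deviates by skipping triage
```

A conformance check then compares such observed edges against the workflow the project team defined, quantifying the non-conformance the abstract refers to.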
Verifying Incomplete and Evolving Specifications
Claudio Menghi
(Politecnico di Milano, Italy)
Classical verification techniques rely on the assumption that the model of the system under analysis is completely specified and does not change over time. However, most modern development life-cycles and even run-time environments (as in the case of adaptive systems) are implicitly based on incompleteness and evolution. Incompleteness occurs when some parts of the system are not specified. Evolution concerns a set of gradual and progressive changes that amend systems over time. Modern development life-cycles are founded on a sequence of iterative and incremental steps through which the initial incomplete description of the system evolves into its final, fully detailed, specification. Similarly, adaptive systems evolve through a set of adaptation actions, such as plugging and removing components, that modify the behavior of the system in response to new environmental conditions, requirements, or legal regulations. Usually, the adaptation is performed by first removing old components, leaving the system temporarily unspecified (incomplete), and then by plugging in the new ones. This work aims to extend classical verification algorithms to consider incomplete and evolving specifications. We want to ensure that after any change, only the part of the system affected by the change is re-analyzed, avoiding re-verifying everything from scratch.
Quantitative Properties of Software Systems: Specification, Verification, and Synthesis
Srđan Krstić
(Politecnico di Milano, Italy)
Functional and non-functional requirements are becoming more and more complex, introducing ambiguities into natural language specifications. A very broad class of such requirements defines quantitative properties of software systems. Properties of this kind are of key relevance for expressing quality of service. For example, they are used to specify bounds on the timing between specific events, or on their number of occurrences. Sometimes they are also used to express higher-level properties, such as aggregate values over the multiplicity of certain events in a specific time window. These are practical specification patterns that can frequently be found in system documentation. The goal of this thesis is to develop an approach for specifying and verifying quantitative properties of complex software systems that execute in a changing environment. In addition, it will explore synthesis techniques that can be applied to infer such properties from execution traces.
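One of the specification patterns mentioned above, a bound on the number of event occurrences in a time window, can be checked over a finite trace with a simple sliding-window sweep. This is only an illustrative sketch of what such a quantitative check looks like; the trace and the bound are invented for the example, not taken from the thesis.

```python
from collections import deque

def max_events_in_window(timestamps, window):
    """Return the maximum number of events observed in any
    half-open interval of length `window` along the trace."""
    q = deque()
    best = 0
    for t in sorted(timestamps):
        q.append(t)
        # Drop events that fell out of the current window.
        while q and q[0] <= t - window:
            q.popleft()
        best = max(best, len(q))
    return best

# Hypothetical trace of request timestamps, in seconds.
trace = [0.1, 0.5, 0.9, 5.0, 5.2, 5.3, 5.4, 9.9]
peak = max_events_in_window(trace, window=1.0)
print(peak)        # 4: the burst around t = 5 s
print(peak <= 3)   # False: the property "at most 3 requests per second" fails
```

Verifying such a bound at runtime, or synthesizing the tightest bound that holds on a set of traces, is a reduction of the kind of quantitative analysis the abstract describes.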
Automatic Generation of Cost-Effective Test Oracles
Alberto Goffi
(University of Lugano, Switzerland)
Software testing is the primary activity to guarantee some level of quality of software systems. In software testing, the role of test oracles is crucial: The quality of test oracles directly affects the effectiveness of the testing activity and influences the final quality of software systems. So far, research in software testing has focused mostly on automating the generation of test inputs and the execution of test suites, paying less attention to the generation of test oracles. Available techniques for generating test oracles are either effective but expensive, or inexpensive but ineffective. Our research work focuses on the generation of cost-effective test oracles. Recent research has shown that modern software systems can provide the same functionality through different execution sequences; in other words, multiple execution sequences perform the same, or almost the same, action. This phenomenon is called the intrinsic redundancy of software systems. We aim to design and develop a completely automated technique to generate test oracles by exploiting the intrinsic redundancy freely available in the software. Test oracles generated by our technique check the equivalence between a given execution sequence and all the redundant and supposedly equivalent execution sequences that are available. The results obtained so far are promising.
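The oracle idea described above, checking a given execution sequence against a supposedly equivalent one, can be sketched minimally as a cross-check: run both sequences on the same inputs and flag any divergence in observable results. The two list operations below are a toy stand-in for the redundant API sequences the thesis targets, not an example from the paper.

```python
def cross_check(seq_a, seq_b, inputs):
    """Redundancy-based oracle: run two supposedly equivalent execution
    sequences and collect every input on which their results diverge."""
    return [x for x in inputs if seq_a(x) != seq_b(x)]

# Two supposedly equivalent execution sequences (intrinsic redundancy):
# appending an element vs. inserting it at the end.
def via_append(x):
    l = [1, 2]
    l.append(x)
    return l

def via_insert(x):
    l = [1, 2]
    l.insert(len(l), x)
    return l

print(cross_check(via_append, via_insert, [0, 7, -3]))  # []: no divergence
```

An empty divergence list means the oracle accepts the execution; any reported input is a candidate failure, obtained without a hand-written expected output.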
Preprint Available
Dynamic Data-Flow Testing
Mattia Vivanti
(University of Lugano, Switzerland)
Data-flow testing techniques have long been discussed in the literature, yet to date they are still of little practical relevance. The applicability of data-flow testing is limited by the complexity and imprecision of the approach: writing a test suite that satisfies a data-flow criterion is challenging, because the coverage domain contains many test objectives that are infeasible, while excluding feasible ones that depend on aliasing and dynamic constructs. To improve the applicability and effectiveness of data-flow testing, we need both to augment the precision of the coverage domain by including data-flow elements that depend on aliasing, and to exclude infeasible elements that reduce the total coverage. In my PhD research I plan to address these two problems by designing a new data-flow testing approach that combines automatic test generation with dynamic identification of data-flow elements, identifying precise test targets by monitoring program executions.
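The dynamic identification of data-flow elements mentioned above can be illustrated with a hand-instrumented sketch: record the last definition of each variable and, at every use, emit the (variable, definition, use) triple that was actually exercised. A real tool would instrument the program automatically; the program, locations, and helpers here are all hypothetical.

```python
defs = {}       # variable -> location of its most recent definition
pairs = set()   # observed (variable, def_location, use_location) triples

def define(var, loc):
    """Record that `var` was (re)defined at `loc`."""
    defs[var] = loc

def use(var, loc):
    """Record the def-use pair exercised by reading `var` at `loc`."""
    if var in defs:
        pairs.add((var, defs[var], loc))

def program(flag):
    define("x", "L1"); x = 0
    if flag:
        define("x", "L3"); x = 1
    use("x", "L5"); return x

program(True)
program(False)
print(sorted(pairs))  # both ('x','L1','L5') and ('x','L3','L5') were observed
```

Each observed triple is a feasible def-use pair by construction, whereas a purely static analysis may report infeasible pairs or, in the presence of aliasing, miss feasible ones.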
Holistic Recommender Systems for Software Engineering
Luca Ponzanelli
(University of Lugano, Switzerland)
Software maintenance is a relevant and expensive phase of the software development process. Developers have to deal with legacy and undocumented code that hinders comprehension of the software system at hand. Enhancing program comprehension by means of recommender systems in the Integrated Development Environment (IDE) is one way to assist developers in these tasks. The recommender systems proposed so far generally share common weaknesses: they are not proactive, they consider a single type of data source, and when multiple data sources are used, relevant items are suggested together without considering the interactions among them. We envision a future where recommender systems follow a holistic approach: they provide knowledge about a programming context by considering information beyond that provided by single elements in the software development context. Such a recommender system should consider different elements such as development artifacts (e.g., bug reports, mailing lists), online resources (e.g., blogs, Q&A web sites, API documentation), developers' activities, and repository history. The provided information should be novel and should emerge from the semantic links created by analyzing the interactions among these elements.
On the Use of Visualization for Supporting Software Reuse
Marcelo Schots
(COPPE, Brazil; Federal University of Rio de Janeiro, Brazil)
Reuse is present in the daily routine of software developers, yet mostly in an ad-hoc or pragmatic way. Reuse practices reduce the time and effort spent on software development; however, organizations struggle to start and sustain a reuse program. The APPRAiSER environment, proposed in this work, aims to provide reuse awareness tailored to each stakeholder's needs through appropriate software visualization mechanisms. The long-term goal is to help introduce, instigate, establish, and monitor software reuse initiatives by decreasing the effort and time stakeholders spend on reuse tasks.
Understanding the Redundancy of Software Systems
Andrea Mattavelli
(University of Lugano, Switzerland)
Our research aims to study and characterize the redundancy of software systems. Intuitively, a software system is redundant when it can perform the same functionality in different ways. Researchers have successfully defined several techniques that exploit various forms of redundancy, for example for tolerating failures at runtime and for testing purposes. We aim to formalize and study the redundancy of software systems in general. In particular, we are interested in the intrinsic redundancy of software systems, that is, a form of undocumented redundancy present in software systems as a consequence of various design and implementation decisions. In this thesis we will formalize the intuitive notion of redundancy. On the basis of this formalization, we will investigate the pervasiveness and the fundamental characteristics of the intrinsic redundancy of software systems, studying its nature, its origin, and its various forms. We will also develop techniques to automatically identify the intrinsic redundancy of software systems.
Preprint Available
Study of Task Processes for Improving Programmer Productivity
Damodaram Kamma
(IIIT Delhi, India)
In a mature overall software development process, the productivity of a software project depends considerably on how effectively programmers execute tasks. A task process refers to the process a programmer uses to execute an assigned task. This research focuses on studying the effect of task processes on programmer productivity. Our approach first identifies high-productivity and average-productivity programmers, then examines the task processes used by the two groups, the similarities between the task processes used by programmers within a group, and the differences between the task processes of the two groups. This study is part of an ongoing study being conducted at a CMMI Level 5 software company. The results so far indicate that high- and average-productivity programmers follow different task processes, and that it may be possible to improve the productivity of average-productivity programmers by training them to use the task processes followed by high-productivity programmers.
Improving Enterprise Software Maintenance Efficiency through Mining Software Repositories in an Industry Context
Senthil Mani
(IIIT Delhi, India)
There is an increasing trend to outsource the maintenance of large applications and application portfolios to third parties specializing in application maintenance, who are incentivized to deliver the best possible maintenance at the lowest cost. In a typical industry setting, a maintenance project spans three phases: Transition, Steady-State, and Preventive Maintenance. Each phase has different goals and drivers, but the underlying software repositories and artifacts remain the same. To improve the overall efficiency of the process and the people involved in these phases, we require appropriate insights derived from the available software repositories. In the past decade, considerable research has been done on mining software repositories and deriving insights, particularly focused on open source software; however, focused studies on enterprise software maintenance in an industrial setting are severely lacking. In this thesis, we intend to understand the industry's needs for insights and the limitations of the available software artifacts across these phases. Based on this understanding, we intend to propose and develop novel methods and approaches for deriving the desired insights from software repositories. We also intend to leverage empirical techniques to validate our approaches both qualitatively and quantitatively.
Supporting Evolution and Maintenance of Android Apps
Mario Linares-Vásquez
(College of William and Mary, USA)
In recent years, the market of mobile software applications (apps) has maintained an impressive upward trajectory. As of today, the market features over 850,000 apps for Android, and 19 versions of the Android API have been released in 4 years. There is evidence that Android apps are highly dependent on the underlying APIs, and that API instability (change proneness) and fault proneness are a threat to the success of those apps. Therefore, the goal of this research is to create an approach that helps developers of Android apps be better prepared for Android platform updates, as well as for updates to third-party libraries, which can potentially (and inadvertently) impact their apps with breaking changes and bugs. We hypothesize that the proposed approach will help developers not only deal with platform and library updates in a timely manner, but also keep (and grow) their user base by avoiding many of these potential API "update" bugs.
Preprint Available