Luc Bouganim, short resume, January 2016 (Detailed resume)

Civil status

French, born on April 15, 1967, married, 2 children


Office address

SMIS project, INRIA Saclay-Île de France – 91120 Palaiseau – France. Phone: +33 1 39 63 51 78 – Fax: +33 1 39 63 56 74


Education

HDR UVSQ in 2006, PhD UVSQ in 1996, Master Paris 6 in 1991, INSA Engineer in 1990


Languages

French, English, Portuguese

Current position

Research Director (DR1) at INRIA, Vice-head of the SMIS project.


Teaching

Advanced databases, databases, introduction to computer science (slides are here)

Research Actions


Folk-IS: Opportunistic Data Services in Least Developed Countries


The Necessary Death of the Block Device Interface


Trusted Cells: A Sea Change for Personal Data Services


MILo-DB: a Personal, Secure and Portable Database Machine


Flash Device Support for Database Management


PDS: Secure Personal Data Servers


uFLIP: Benchmarking Flash Devices


Databases and Cryptography


PlugDB: Personal data management


EBAC, Event-Based Access Control model


GhostDB: mixing public and sensitive data


Data Degradation


C-SXA, Tamper-resistant XML access control model


DISC, benchmarking secure chip


C-SDA, Secure access to relational databases


PicoDBMS, a smartcard DBMS


Efficient processing of queries with expensive functions


Adaptive execution models


Parallel query execution: load balancing

Awards (11)

Awarded Projects (1), Awarded Software (2), Awarded Publications (8)

Software (8)

EagleTree, uFLIP, PlugDB, GhostDB, C-SXA, C-SDA, PicoDBMS, DBS3

HDR tutor (1)

N. Anciaux (2014)

PhD Supervision (12)

J. Loudet (2015-), A. Katsouraki (2013-), N. Dayan (2012-2015), M. Bjørling (2011-2015),
L. Le Folgoc (2009-2012), Y. Guo (2008-2011), M. Benzine (2005-2010), F. Dang-Ngoc (2002-06),
N. Anciaux (2000-04), I. Manolescu (1998-2001), F. Porto (1999-2001), O. Kapitskaia (1996-99)

PhD reviewer (11)

P. Olivier, T. Sarni, S. Jacob, B. Chardin, W. Palma, R. Akbarinia, J.M. Busca,
B. Cautis, T. Sans, D. Coquil, R. Wong

Jury member (14)

N. Dayan, A. F. Sanoja Vargas, J.P. Lozi, M. Casalino, L. Le Folgoc, H.T.T. Truong, Y. Guo,
B. Biswas, M. Benzine, J. Cordry, D. Bromberg, F. Dang-Ngoc, N. Anciaux, L. Némirovski

PC member (59)

PC chair (1), Demonstration chair (1), PC member (52), Editorial Board (1), Organization (4)

Projects (12)

Participation (management and technical contributions) in 12 national and European projects


Vice-head of the SMIS team since its creation in 2004. Co-president of the "Commission Emplois Scientifiques" for INRIA Paris-Rocquencourt.

Publications (116)

Articles in International Peer-Reviewed Journal (20)

Articles in National Peer-Reviewed Journal (8), Book chapters, Proceedings (10)

Articles in International Peer-Reviewed Conference with Proceedings (34)

Articles in National Peer-Reviewed Conference with Proceedings (22)

Patents, registered software (7), PhD thesis (1), HDR (1), Miscellaneous (13)

Curriculum Vitae - Luc Bouganim, November 2015


1       General data, Education & professional experience



Birth date:

April 15, 1967.

Marital status:

Married, 2 children.

Office address:

SMIS project, INRIA Saclay-Île de France – 91120 Palaiseau – France.
Phone: +33 1 39 63 51 78 – Fax: +33 1 39 63 55 96

Home address:

29, rue Boussingault, 75013 Paris


Current Position:

Research director (DR1) at INRIA, Vice-head of the SMIS project.

Education


2006: Accreditation to supervise research (Habilitation à Diriger des Recherches – HDR), University of Versailles (UVSQ). Title: "Sécurisation du contrôle d’accès dans les bases de données". Jury members: F. Cuppens (reviewer), P. Paradinas (reviewer), D. Shasha (reviewer), C. Kirchner, P. Pucheral, P. Valduriez.


1996: Ph.D. in Computer Science, University of Versailles (UVSQ). Title: "Équilibrage de charge lors de l'exécution parallèle de requêtes sur des architectures multiprocesseurs hybrides". Jury members: A. Flory (reviewer), M. Kersten (reviewer), C. Delobel, G. Gardarin, P. Pucheral, P. Valduriez (advisor).


1991: Master in Computer Science ("Diplôme d'Etudes Approfondies"), University of Paris 6. Advisor: G. Gardarin.


1990: Engineer in Computer Science from the "Institut National des Sciences Appliquées" (INSA) de Lyon.

Languages

French, English, Portuguese

Professional Experience

Dec. 2015 – Now: Research director (DR1) at INRIA Saclay-Île de France. Vice-head of the SMIS project.

Sept. 2013 – Nov. 2015: Research director (DR1) at INRIA Paris-Rocquencourt. Vice-head of the SMIS project.

Sept. 2006 – Aug. 2013: Research director (DR2) at INRIA Paris-Rocquencourt (from April to August 2008, at the University of Copenhagen - DIKU). Vice-head of the SMIS project.

Sept. 2002 – Aug. 2006: Researcher (CR1) at INRIA Rocquencourt. Vice-head of the SMIS project.

Sept. 1997 – Aug. 2002: Assistant professor at the University of Versailles (UVSQ) - PRiSM laboratory.

Dec. 1996 – Dec. 2001: Research consultant at INRIA Paris-Rocquencourt.

Dec. 1996 – Aug. 1997: Post-doc at INRIA Paris-Rocquencourt.

Sept. 1993 – Dec. 1996: Ph.D. thesis - Rodin project - INRIA Paris-Rocquencourt.

Nov. 1991 – July 1993: Military service in cooperation: organized the French participation in the ECO’92 Earth Summit.

Nov. 1990 – Oct. 1991: Research trainee, Bull, Les Clayes-sous-Bois.

2       Teaching

As a former assistant professor at the University of Versailles (teaching around 250 hours/year), I continue to teach in universities and engineering schools (around 80 hours/year). I have produced several sets of lecture slides, available here, for undergraduate students. In addition, every year I prepare lecture slides for master students on more advanced topics (e.g., database security, a tutorial on databases on chips).



Privacy by Design Information Systems: Raising master and engineering students' awareness of privacy-by-design concepts, secure hardware and embedded programming.


UVSQ (Master M1), ENSIIE (2nd year students)


Advanced databases: Study of the internal functioning of a DBMS, in order to understand its behavior with respect to performance. Introduction to fundamental topics in database research.




Databases: Providing the theoretical and practical basis necessary for defining and using a relational database (with practical application on Oracle).


UVSQ (Master M1), U. Paris Sud.


Introduction to computer science: Bringing absolute beginners to master the practice of computer tools and to understand their functioning principles.


UVSQ (Master M1), U. Paris Sud.

3        Research activities

Research actions

[RA19]  Folk-IS: Opportunistic Data Services in Least Developed Countries (2013-…): According to a wide range of studies, IT should become a key facilitator in establishing primary education, reducing mortality or supporting commercial initiatives in Least Developed Countries (LDCs). The main barrier to the development of IT services in these regions is not only the lack of communication facilities, but also the lack of consistent information systems, security procedures, economic and legal support, as well as political commitment. We propose the vision of an infrastructure-less data platform well suited for the development of innovative IT services in LDCs. We propose a participatory approach, where each individual implements a small subset of a complete information system thanks to highly secure, portable and low-cost personal devices as well as opportunistic networking, without the need for any form of infrastructure. We do not argue that Folk-IS is the ultimate solution. The future of IT in LDCs will probably be multiform, the problem being important and complex enough to leave room for complementary initiatives. Folk-IS has the salient characteristic of enabling a smooth and incremental deployment of an information system in a purely infrastructure-less context, while taking advantage of existing elements of infrastructure, if any, to improve its own behavior.

[RA18] The Necessary Death of the Block Device Interface (2013-2015): Solid State Drives (SSDs) are replacing magnetic disks as secondary storage for database management, as they offer orders of magnitude improvement in terms of bandwidth and latency. In terms of system design, the advent of SSDs raises considerable challenges. First, the storage chips, which are the basic components of an SSD, have widely different characteristics – e.g., copy-on-write, erase-before-write and page-addressability for flash chips vs. in-place update and byte-addressability for PCM chips. Second, SSDs are no longer a bottleneck in terms of I/O latency, which forces streamlined execution throughout the I/O stack. Finally, SSDs provide a high degree of parallelism that must be leveraged to reach nominal bandwidth. This evolution puts database system researchers at a crossroads. The first option is to hang on to the current architecture, where secondary storage is encapsulated behind a block device interface. This is the mainstream option both in industry and academia. It leaves the storage and OS communities with the responsibility to deal with the complexity introduced by SSDs, in the hope that they will provide us with a robust, yet simple, performance model. We showed that this option amounts to building on quicksand. We illustrated our point by debunking some popular myths about flash devices and by pointing out mistakes in the papers we have published throughout the years. The second option is to abandon the simple abstraction of the block device interface and reconsider how database storage managers, operating system drivers and SSD controllers interact. We gave our vision of how modern database systems should interact with secondary storage. This approach requires a deep re-design of the database system architecture; it is the only viable option for database system researchers to avoid becoming irrelevant.

[RA17] Trusted Cells: A Sea Change for Personal Data Services (2013-…): How do you keep a secret about your personal life in an age where your daughter’s glasses record and share everything she senses, your wallet records and shares your financial transactions, and your set-top box records and shares your family’s energy consumption? Your personal data has become a prime asset for many companies around the Internet, but can you avoid – or even detect – abusive usage? Today, there is a wide consensus that individuals should have increased control over how their personal data is collected, managed and shared. Yet there is no appropriate technical solution to implement such personal data services: centralized solutions sacrifice security for innovative applications, while decentralized solutions sacrifice innovative applications for security. We argue that the advent of secure hardware in all personal IT devices, at the edges of the Internet, could trigger a sea change. We propose the vision of trusted cells: personal data servers running on secure smart phones, set-top boxes, secure portable tokens or smart cards, forming a global, decentralized data platform that provides security yet enables innovative applications. We motivate our approach, describe the trusted cells architecture and define a range of challenges for future research.

[RA16]  MILo-DB: a Personal, Secure and Portable Database Machine (2011-2013): Mass-storage secure portable tokens are emerging and provide a real breakthrough in the management of sensitive data. They can embed personal data and/or metadata referencing documents stored encrypted in the Cloud, and can manage them under the holder’s control. Mass on-board storage requires efficient embedded database techniques. These techniques are however very challenging to design due to a combination of conflicting NAND Flash constraints and a scarce RAM budget, disqualifying known state-of-the-art solutions. To tackle this challenge, we proposed a log-only storage organization and an appropriate indexing scheme, which (1) produce only sequential writes, compatible with the Flash constraints, and (2) consume a tiny amount of RAM, independent of the database size. We showed the effectiveness of this approach through a comprehensive performance study.
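The log-only principle behind this design can be sketched in a few lines (a toy illustration of my own, not MILo-DB's actual structures; a Python list stands in for sequentially written flash pages):

```python
class LogOnlyStore:
    """Toy sketch of a log-only storage organization: every update is an
    append (a sequential write, never an in-place update); a read scans the
    log backwards so that the most recent version of a key wins."""

    def __init__(self):
        self.log = []  # stands in for sequentially written flash pages

    def put(self, key, value):
        self.log.append((key, value))  # append-only: Flash-friendly

    def get(self, key):
        for k, v in reversed(self.log):  # newest entry wins
            if k == key:
                return v
        return None
```

The sketch shows why indexing matters in the real design: without an index, reads degrade linearly with the log size, which is exactly what the proposed RAM-frugal indexing scheme avoids.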

[RA15]  Flash Device Support for Database Management (2010-2013): While disks have offered a stable behavior for decades, thus guaranteeing the timelessness of many database design decisions, flash devices keep on mutating. Their behavior varies across models, across firmware updates and possibly in time for the same model. Many researchers have proposed to adapt database algorithms for existing flash devices; others have tried to capture the performance characteristics of flash devices. However, today, we neither have a reference DBMS design nor a performance model for flash devices: database researchers are running after flash memory technology. In this study, we take the reverse approach and we define how flash devices should support database management. We advocate that flash devices should provide guarantees to a DBMS so that it can devise stable and efficient IO management mechanisms. Based on the characteristics of flash chips, we define a bimodal FTL that distinguishes between a minimal mode where sequential writes, sequential reads and random reads are optimal while updates and random writes are forbidden, and a mode where updates and random writes are supported at the cost of sub-optimal IO performance. Interestingly, the guarantees of a minimal mode have been taken for granted in many articles from the database research literature. Our point is that these guarantees are not a law of nature: we must guide the evolution of flash devices so that they are enforced. An important point is that providing optimal mapping guarantees does not hinder competition between flash device manufacturers. On the contrary, they can compete to (a) bring down the cost of optimal IO patterns (e.g., using parallelism), and (b) bring down the cost of non-optimal patterns without jeopardizing DBMS design. Future work includes designing and building a bimodal FTL in collaboration with a flash device manufacturer.
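The bimodal idea can be sketched as a toy model (illustrative only, not the actual FTL design; class and method names are my own): a region in minimal mode accepts only sequential, write-once page writes, while a general-purpose region accepts any write at a (real-world) performance cost.

```python
class BimodalRegion:
    """Toy model of a bimodal FTL region. In minimal mode, only sequential,
    write-once page writes are accepted (updates and random writes are
    forbidden); reads are always allowed. A non-minimal region accepts any
    write, mimicking the costlier general-purpose mode."""

    def __init__(self, minimal=True):
        self.minimal = minimal
        self.pages = {}      # page number -> data
        self.next_page = 0   # next sequential position in minimal mode

    def write(self, page, data):
        if self.minimal and page != self.next_page:
            # covers both random writes and updates of already-written pages
            raise ValueError("minimal mode: only sequential writes allowed")
        self.pages[page] = data
        self.next_page = max(self.next_page, page + 1)

    def read(self, page):
        return self.pages.get(page)
```

In this toy model the guarantee is enforced by an exception; in a real device it would be a contract between the DBMS and the FTL, letting the DBMS rely on optimal IO costs in the minimal mode.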

[RA14]  PDS: Secure Personal Data Servers (2010-2013): An increasing amount of personal data is automatically gathered and stored on servers by administrations, hospitals, insurance companies, etc. Citizens themselves often count on Internet companies to store their data and make them reliable and highly available through the Internet. However, these benefits must be weighed against the privacy risks incurred by centralization. In this study, we consider a radically different way of managing personal data. We build upon the emergence of new portable and secure devices combining the security of smart cards and the storage capacity of NAND Flash chips. By embedding a full-fledged Personal Data Server in such devices, the user's control over how her sensitive data is shared with others (by whom, for how long, according to which rule, for which purpose) can be fully re-established and convincingly enforced. To give sense to this vision, Personal Data Servers must be able to interoperate with external servers and must provide traditional database services like durability, availability, query facilities and transactions. We proposed an initial design for the Personal Data Server approach, identified the main technical challenges associated with it and sketched preliminary solutions.

[RA13]  uFLIP: Benchmarking Flash Devices (2008-2011): Thanks to its excellent properties in terms of read performance, energy consumption and shock resistance, NAND Flash has become a credible competitor even to traditional disks on high-end servers. Our goal is to study how database systems adapt to this new form of secondary storage. Before we can answer this question, we need to fully understand the performance characteristics of flash devices. We have designed a benchmark, called uFLIP, to cast light on all relevant usage patterns of current, as well as future, flash devices. uFLIP is a set of nine micro-benchmarks based on IO patterns (i.e., sequences of IOs). Each micro-benchmark is a set of experiments designed around a single varying parameter that affects either time, size, or location. Thanks to uFLIP, we established which kinds of IOs should be favored (or avoided) when designing algorithms and architectures for flash-based systems. We also set up a benchmarking methodology that takes into account the particular characteristics of flash devices. This work was done in cooperation with the University of Copenhagen and Reykjavík University. More recently, we have also devised a mechanism for measuring the energy consumption of flash devices. While energy consumption cannot be traced to individual IOs, we can associate energy consumption figures with IO patterns, which helps to further understand the behavior of the devices.
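The spirit of such a micro-benchmark — timing one IO pattern while varying a single parameter, here the location of the IOs — can be sketched as follows (a simplified illustration, not the actual uFLIP code; function names are mine):

```python
import random
import time

def run_pattern(path, offsets, size=4096):
    """Time one IO pattern (a sequence of reads at the given offsets) and
    return the mean latency per IO, in the spirit of a uFLIP micro-benchmark."""
    with open(path, "rb") as f:
        start = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.read(size)
    return (time.perf_counter() - start) / len(offsets)

def make_patterns(count, size=4096):
    """Build a sequential and a random IO pattern over the same file region,
    so that only the IO location varies between the two runs."""
    sequential = [i * size for i in range(count)]
    rnd = sequential[:]
    random.shuffle(rnd)
    return sequential, rnd
```

Comparing the two mean latencies on a given device illustrates the kind of measurement from which uFLIP derives which IO patterns a flash-based system should favor or avoid.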

[RA12]  Databases and Cryptography (2009-2011): We initiated, in 2009, a cooperation with members of the SECRET project-team which focuses on the use of cryptographic techniques for ensuring the confidentiality and integrity of data stored in databases. Using cryptographic techniques ‘as-is’ to provide the aforementioned guarantees has a large negative impact on the database size (e.g., a 20-byte MAC is added to each encrypted attribute value in Oracle 11g TDE to ensure data authenticity) and on database performance, thus motivating much ongoing research on that topic. In a first step, we made an exhaustive study of the state of the art, revealing that many of the proposed techniques are not secure. We then proposed a set of lightweight crypto-protection building blocks that allow protecting small-granularity data as well as performing selections on encrypted data.

[RA11]  PlugDB: Personal data management (2007-2011): Existing solutions for sharing and manipulating personal data (medical, social, administrative, commercial, professional data, etc.) are usually server-based. These solutions suffer from two weaknesses. The first one lies in the impossibility to access the data without a permanent, reliable, secure and high-bandwidth connection. The second is the lack of security guarantees as soon as the data leaves the security realm of the server. We address these limitations with the help of a new secure device named SPT (Secure Portable Token). An SPT combines the intrinsic security of smart cards with the storage capacity of USB keys (several GB soon) and the universality of the USB protocol. The innovation lies in the association of sophisticated data management techniques with cryptographic protocols embedded in an SPT-like device. More precisely, a specific DBMS engine must be designed to match the peculiarities of the SPT storage memory (NAND Flash) and the limited processing capacities of its microcontroller. New cryptographic protocols dedicated to the protection of data at rest, as well as of data in transit in collaborative scenarios, must also be designed.

[RA10]  EBAC, Event-Based Access Control model (2009-2011): We focused on the protection of personal data (also called micro-data), where concepts like user consent, purpose declaration and limited retention play a central role. The challenge is to define models as simple as possible to help a user calibrate a predefined access control policy to the user’s specific situation and sensitivity. We started by studying how user consent could be more easily expressed in the context of Electronic Health Record (EHR) systems. Indeed, the access control policies usually defined to regulate accesses to EHR systems are far too complex to expect patients to give informed consent to them, as required by law. This is mainly due (1) to a huge number of rules (a huge number of practitioners with a diversity of roles and privileges) and (2) to the intrinsic complexity of the data to be protected (which data reveals which pathology?). To tackle this issue, we are designing an Event-Based Access Control model (EBAC) helping the patient mask sensitive records in her folder. The EBAC masking rules take priority over the default access control rules and are defined on metadata that is easy for the user to manage. Any document added to a folder is described by an event; events are grouped into episodes (i.e., sets of events sharing a common masking policy, like "MyAbortion", "MySecondDepression"), and the participation of a practitioner in an episode is regulated by a relation of confidence. This work is still at a preliminary stage.

[RA9]    GhostDB: mixing public and sensitive data (2007-2009): We focus on the management of databases mixing public and sensitive data. People talk about privacy, but give it up very easily, especially when faced with complex security procedures that offer only conditional guarantees. This implies that, for people’s sensitive data to be protected, protecting it must require little physical effort and must perform well. We proposed a system whereby people carry hidden sensitive data on a tamper-resistant USB key and plug that key into a personal computer when they need to link their hidden data with visible public data, all with the assurance that no hidden data will ever go out in the open. The principal novelties follow directly from the challenges of implementing this mode of operation: (1) how to simply declare which data should be visible and which hidden, and how to query them, (2) how to index the data, and (3) which query processing strategies to use to link public and private data hosted on extremely unequal devices (a standard computer and a smart USB key). Our philosophy is to make the user’s life as easy as possible while efficiently supporting SQL queries on arbitrarily large databases. Efficiency considerations on the small-RAM secure USB key led us to the design of generalized join indexes, Bloom filters for approximate filtering, the postponement of selections until after joins in certain cases, and algorithms that reflect the differences in read/write performance of the secure USB key. A prototype has been implemented and demonstrated at the VLDB and BDA conferences. This initial work has recently been extended to tackle the case of aggregate computations performed on a mix of hidden sensitive data (kept on a tamper-resistant device) and public data (available on a public server). The goal is to produce aggregates for data warehouses for OLAP purposes, and to reveal exactly what is desired, neither more nor less.
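Bloom filters, mentioned above for approximate filtering, are attractive in a small-RAM setting because they answer membership queries in constant space, with possible false positives but never false negatives. A minimal sketch (purely illustrative, unrelated to GhostDB's actual implementation):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a fixed-size bit array plus k hash functions.
    Membership tests may yield false positives but never false negatives,
    which is what makes the structure safe for approximate pre-filtering."""

    def __init__(self, nbits=1024, nhashes=3):
        self.nbits = nbits
        self.nhashes = nhashes
        self.bits = 0  # an int used as a bit array

    def _positions(self, item):
        # derive k bit positions from k salted hashes of the item
        for i in range(self.nhashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.nbits

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all((self.bits >> p) & 1 for p in self._positions(item))
```

A "not in" answer is definitive, so a filter kept on the secure key can safely discard most non-matching public tuples before any costly lookup.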

[RA8]    Data Degradation (2006-2009): We are tackling the limited data retention problem. Our daily life activity leaves digital trails in an increasing number of databases (commercial web sites, internet service providers, search engines, location tracking systems, etc.). Personal digital trails are commonly exposed to accidental disclosures and ill-intentioned scrutiny resulting from negligence, piracy and abusive usage. No one is sheltered, because common events, like applying for a job, can suddenly make our history a precious asset. By definition, access control fails to prevent trail disclosures, and anonymity techniques are often not usable in this context, motivating the integration of the Limited Data Retention principle into legislation protecting data privacy. By this principle, data is withdrawn from a database after a predefined time period. However, this principle is difficult to apply in practice, leading to the retention of useless sensitive information for years. To address this issue, we propose the Data Degradation Model, where sensitive data undergoes a progressive and irreversible degradation from an accurate state, to degraded but still informative states, up to complete disappearance when the data becomes useless, along with suitable query and transaction semantics. The benefit of this model is twofold: (i) the amount of accurate data, and thus the privacy offence resulting from a trail disclosure, is drastically reduced; (ii) the model is flexible enough to remain in line with the applications' purposes, and thus favors data utility. We have recently formalized those benefits, and shown (under reasonable assumptions) to what extent data degradation overcomes basic implementations of the limited data retention principle. In addition, the data degradation model strongly impacts core database techniques, opening interesting research issues. We made a preliminary study in that direction, proposing database storage and indexing structures, as well as logging and locking mechanisms adapted to data degradation, to show the practical feasibility of the model.
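The progression from an accurate state through less informative states to disappearance can be sketched as follows (a toy illustration with an invented attribute and generalization path, not the model's actual formalization):

```python
# Toy sketch of the data degradation idea: each sensitive attribute follows a
# predefined path from an accurate state to coarser states, ending in None
# (complete disappearance). The path below is an invented example.
DEGRADATION_PATH = {
    "location": ["48.8566,2.3522", "Paris", "France", None],
}

def degrade(attribute, state):
    """Return the next, strictly less accurate state for this attribute
    (None means the value has disappeared and stays gone)."""
    path = DEGRADATION_PATH[attribute]
    i = path.index(state)
    return path[min(i + 1, len(path) - 1)]
```

Each degradation step is irreversible by construction: once a state has been replaced by a coarser one, the accurate value is no longer stored anywhere, which is what bounds the damage of a later disclosure.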

[RA7]    C-SXA, Tamper-resistant XML access control model (2004-2008): The erosion of trust put in traditional database servers and in Database Service Providers, and the growing interest in different forms of selective data dissemination, are factors that lead to moving access control from servers to clients. Different data encryption and key dissemination schemes have been proposed to serve this purpose. By compiling the access control rules into the encryption process, all these methods suffer from a static way of sharing data. We proposed a tamper-resistant, client-based XML access rights controller supporting flexible and dynamic access control policies. The access control engine is embedded in a secure hardware device and therefore must cope with specific hardware resources.

[RA6]    DISC, benchmarking secure chip (2003-2008): Preliminary studies led us to design the first full-fledged DBMS embedded in a smart card, called PicoDBMS (see [RA4]). Based on the experience of PicoDBMS performance evaluation, we designed a benchmark, called DiSC, dedicated to secure chip DBMSs, in order to (1) compare the relative performance of candidate storage and indexation data structures, (2) predict the limits of on-chip applications, and (3) provide co-design hints to help calibrating the resources of a future secure chip to meet the requirements of on-chip data intensive applications. This work concludes the PicoDBMS study.

[RA5]    C-SDA, Secure access to relational databases (2001-2003): While encryption has been used successfully for years to secure communications, database encryption introduces new theoretical and practical issues: how to efficiently execute database queries over encrypted data, how to reconcile declarative (i.e., predicate-based) access rights with encryption, how to distribute encryption keys between users sharing part of the database, and how to take advantage of secure computing devices. We proposed a solution called C-SDA (Chip-Secured Data Access), which allows querying encrypted data while controlling predicate-based personal privileges. C-SDA is embedded into a smart card to prevent any tampering from occurring on the client side. This cooperation of hardware and software security components re-establishes the orthogonality between access rights management and data encryption. The CNRS (Centre National de la Recherche Scientifique) took out a patent on C-SDA.

[RA4]    PicoDBMS, a smartcard DBMS (1999-2004): The interest of embedding a DBMS in a smart card lies in the high degree of security brought by the card, making it an ideal support for applications such as classified military files or personal medical files. We proposed the first full-fledged DBMS embedded in a smart card, called PicoDBMS. PicoDBMS aims at managing shared secure portable folders. The difficult problem lies more in tackling the asymmetry between hardware resources (e.g., powerful CPU, tiny RAM) than simply in tackling resource scarcity. This hardware setting entails a thorough re-thinking of existing database techniques. Three years of joint efforts with our industrial partner Axalto (design optimization, new hardware platform, OS adaptation) were necessary to get a convincing prototype.

[RA3]    Efficient processing of queries with expensive functions (1999-2002): The Internet makes it possible to share data and programs within groups of scientists. A data integration system allows, on the one hand, publishing resources (data and programs) and, on the other hand, querying these resources transparently. We first outlined the different problems raised by queries over expensive functions and studied their interactions, most importantly with the optimization phase. We then proposed an architecture, operators and specific algorithms that: (i) minimize data transfers and program calls, (ii) maximize parallelism with a policy of fair sharing of resources, and (iii) maximize the early output rate. We use caching, asynchronism and intra-operator parallelism, within a framework of pipelined query execution.

[RA2]    Adaptive execution models (1997-1999): Execution plans produced by traditional query optimizers may perform poorly for several reasons: cost estimates may prove inexact; the memory available at query execution time may be insufficient; and the data, if it comes from distant sources, may not be readily available when it is required. In a first step, I addressed the problem of memory management during the execution of complex queries. In the proposed execution strategy, the execution plan is dynamically modified as soon as memory proves to be insufficient. Comparisons with several static strategies showed largely improved performance. I then considered the problem of data availability during the execution of integration queries involving distant data sources. The approach is both proactive, producing a step-by-step ordering of several query fragments, and reactive, executing these fragments according to the arrival of distant data. Building on this work, we were able to present a generic architecture including dynamic, adaptive features at several levels of the execution engine.

[RA1]    Parallel query execution: load balancing (thesis, 1993-1996): During the parallel execution of database queries, one of the major obstacles to obtaining good performance is load balancing among the processors, the response time of the query being that of the most loaded processor. In the case of bad load balancing, load redistribution is required. I successively addressed three types of parallel architectures: (i) shared memory; (ii) hierarchical; and (iii) with non-uniform memory access (NUMA). For each of them, I proposed execution models with dynamic load redistribution at the intra- and inter-operator levels, minimizing the overheads of redistribution. For the hierarchical architecture, for example, the proposed execution model allows a dynamic two-level load distribution (local on a shared-memory node and global among the nodes). This model maximizes local load distribution, thus reducing the need for global load redistribution, which is a source of high overhead. This work was validated through measurements on the DBS3 prototype, and through simulations.


Awarded Projects

[A11]     Best poster award for the PlugDB project (see [P9]) during the STIC conference organized by ANR (Agence Nationale de la Recherche) and sponsored by OSEO (January 2010, 261 participating projects). ‘The PlugDB project was distinguished for its communication qualities: clarity of the message, quality of the visuals and of the demonstration. Its specificity is to be a project stemming from research (funded by the research ministry within the RNTL - Réseau National en Technologies Logicielles - in 2006) that constrained itself to reach concrete applications as quickly as possible.’ (Oseo).

Awarded Software

[A10]     Gold award (with P. Pucheral, F. Dang Ngoc and N. Dieu) of the SIMagine’05 international software contest organized by Sun Microsystems, Axalto and Samsung Electronics (more than 300 participating teams) for the MobiDiQ project (Mobile Digital Quietude). MobiDiQ is a fair Digital Right Management (DRM) engine embedded in a SIM card (cell phone smart card). Fair DRM means preserving the interest of all parties in a lucrative or non-profit dissemination of digital contents (e.g., free access to cultural contents for students or artists, parental or teacher control prohibiting access to non-ethical contents). MobiDiQ is nothing but an application scenario relying on the C-SXA technology. Complex and dynamic access control policies are defined on XML digital contents depending on personal data (e.g., history, user profile, etc.) stored securely on the SIM card. Link

[A9]      Silver award (with P. Pucheral and F. Dang Ngoc) of the e-gate open 2004 international software contest organized by Sun, Axalto and STM (84 participating teams from 22 countries) for the C-SXA project (Chip-Secured XML Access). C-SXA can be used either to protect the privacy of on-board personal data or to control the flow of data extracted from an external source. Because encryption is separated from access control, several personalized access control policies can be defined on the same document, reflecting different privileges for different users. These policies can easily evolve by updating the access control rules without impacting the document encryption. Thanks to these features, this prototype addresses important emerging applications (e.g., portable folders, DRM, secure data sharing among family members, friends or business partners) that cannot be tackled by existing technologies. Link

Awarded Publications (8)

[A8]      Best paper award of CIDR’2009, biennial Conference on Innovative Data Systems Research, uFLIP: Understanding Flash IO Patterns [IC24].

            ‘The paper's impact is high as flash memory is becoming a popular technology, and offers the DB audience a hands-on introduction to flash memory from a very practical perspective. The comparison of flash to regular drives is methodical and intuitive and the authors outline the "to do" and "not to do" when designing I/O techniques closely to the system designer. The paper fits squarely into CIDR's objectives as it has a scientific but also an admirably strong practical side, and it opens avenues for future research work in database systems.’ Anastasia Ailamaki.

[A7]      Selected publication (Top 5) of BDA 2004, Gestion sur le client de contrôle d'accès pour des documents XML [NC12], selected for publication in the ISI journal [NJ7]

[A6]      Selected publication (Top 5) of BDA 2002, Efficient Data and Program Integration Using Binding Patterns [NC10], selected for publication in the TSI journal [NJ6]

[A5]      Best paper award of the 26th International Conference on Very Large Data Bases, VLDB 2000: PicoDBMS: Scaling down Database Techniques for the Smartcard [IC7].

[A4]      Selected publication (Top 4) of the 7th International Conference on Information and Knowledge Management, ACM CIKM’98: Memory Adaptive Scheduling for Large Query Execution [IC4]. An extended version appears in the Networking and Information Systems Journal (NISJ) [IJ1].

[A3]      Best paper award of the 3rd International ACPC Conference (ACPC’96): Skew handling in the DBS3 Parallel Database System [IC2].

[A2]      Selected publication (Top 4) of BDA 1996, Répartition dynamique de la charge dans un système de base de données parallèle hiérarchique [NC4], selected for publication in the ISI journal [NJ2]

[A1]      Selected publication (Top 4) of BDA 1994, Performance du SGBD Parallèle DBS3 sur la Machine KSR1 [NC2], selected for publication in the ISI journal [NJ1]

Software development (8)

[So8]     EagleTree: EagleTree is an extensible, customizable SSD simulator designed to enable deep analyses of the interplay between the FTL, block management scheme, IO scheduling policy and application workload. It is able to generate visual illustrations of a host of performance metrics. EagleTree is available for Linux, and is licensed under GPL. The prototype is led by Niv Dayan (see [Ph10]).

[So7]     The uFLIP Benchmark: It is amazingly easy to get meaningless results when measuring flash devices, partly because of the peculiarities of flash memory, but primarily because their behavior is determined by layers of complex, proprietary, and undocumented software and hardware. uFLIP is a component benchmark for measuring the response time distribution of flash IO patterns, defined as the distribution of IOs in space and time. uFLIP includes a benchmarking methodology that takes into account the particular characteristics of flash devices. The source code of uFLIP, available on the web, was registered at the APP in 2009.

[So6]     PlugDB engine: More than a stand-alone prototype, PlugDB is a complete architecture dedicated to secure and ubiquitous management of personal data, aiming to provide an alternative to the systematic centralization of personal data. To meet this objective, the PlugDB architecture relies on a new hardware device called a Secure Portable Token (SPT). Roughly speaking, an SPT combines a secure microcontroller (similar to a smart card chip) with a large external Flash memory (Gigabyte-sized) in a USB key form factor. The SPT can host data on Flash (e.g., a personal folder) and safely run code embedded in the secure microcontroller. The PlugDB engine is the centerpiece of this embedded code: it manages the database on Flash (tackling the peculiarities of NAND Flash storage), enforces the access control policy defined on this database, protects the data at rest against piracy and tampering (thanks to cryptographic protocols), executes queries (despite the low RAM constraint) and ensures transaction atomicity. Part of the on-board data can be replicated on a server (then synchronized) and shared among a restricted circle of trusted parties through crypto-protected interactions. PlugDB is being experimented in the field to implement a secure and portable medical-social folder supporting the coordination of medical care and social services provided at home for dependent people. Developed with N. Anciaux, P. Pucheral, S. Yin, Y. Guo, L. Le Folgoc and A. Troussov.

[So5]     GhostDB: GhostDB is a relational database engine embedded in a secure USB key (a large Flash persistent store combined with a tamper- and snoop-resistant CPU and a small RAM) that allows linking private data carried on the USB key with public data available on a public server. GhostDB ensures that the only information revealed to a potential spy is the query issued and the public data accessed. Queries linking public and private data entail novel distributed processing techniques on extremely unequal devices, in which data flows in a single direction: from public to private. The GhostDB prototype has been developed in C and currently runs on a software simulator of the USB device. This simulator is I/O-accurate, meaning that it delivers the exact number of pages read and written in Flash, thus allowing assessment of GhostDB performance. The GhostDB prototype was recently demonstrated at the VLDB’07 and BDA’07 conferences. Developed with M. Benzine, N. Anciaux, P. Pucheral and C. Salperwyck.

[So4]     C-SXA: Chip-Secured XML Access (C-SXA) is an XML-based access rights controller embedded in a smart card. C-SXA evaluates a user’s privileges on a queried or streaming encrypted XML document and delivers the authorized subset of this document. Compared to existing methods, C-SXA supports fine-grained and dynamic access control policies by separating access control issues from encryption. Application domains cover the exchange of confidential data among a community of users, as well as selective data dissemination. A first C-SXA prototype was developed on a hardware cycle-accurate simulator to assess the medium-term viability of the approach in terms of performance. Then, a C-SXA engine was developed in JavaCard on a real smart card platform and was demonstrated at the ACM SIGMOD’05 conference. Developed with F. Dang-Ngoc, N. Dieu and P. Pucheral.

[So3]     C-SDA: The C-SDA (Chip-Secured Data Access) architecture allows querying encrypted data while controlling fine-grained and dynamic personal privileges. C-SDA is a client-based security component acting as an incorruptible mediator between a client and an encrypted database. This component is embedded into a smart card to prevent any tampering on the client side. The CNRS (Centre National de la Recherche Scientifique) took out a patent on C-SDA. A JavaCard prototype was developed, partly with the support of the French ANVAR agency (Agence Nationale pour la VAlorisation de la Recherche), and was demonstrated at the VLDB’03 conference. Developed with F. Dang Ngoc, L. Wu and P. Pucheral.

[So2]     PicoDBMS: PicoDBMS is a full-fledged smart card DBMS aimed at managing shared secure portable folders. A first prototype written in JavaCard was demonstrated at the VLDB’01 conference. It showed the feasibility of the approach but exhibited disastrous performance. Since then, a second prototype has been written in C and optimized, partly with the help of Axalto. This prototype now runs on an experimental smart card platform and exhibits performance two orders of magnitude better than its JavaCard counterpart. A cycle-accurate hardware simulator allowed us to predict PicoDBMS performance on future smart card platforms. Extensive experiments have recently been conducted on this prototype thanks to a dedicated Pico Database Benchmark. Developed with N. Anciaux and P. Pucheral.

[So1]     DBS3, a parallel DBMS: design and implementation of the transactional parallel kernel (20,000 lines of code) and optimization of the execution engine of DBS3, developed within the European project ESPRIT-II EDS (European Declarative System). DBS3 was subsequently used for performance measurements by members of the project. The main difficulty resided in supporting a high degree of parallelism and concurrency between the transactional and decision-support query loads (a multi-version protocol was used). Developed with P. Casadessus.

4       Scientific activities

HDR Tutor

[Hd1]    2014: HDR tutor of Nicolas Anciaux ‘Gestion de données personnelles respectueuse de la vie privée’ defended in December 2014 at the University of Versailles.

PhD Supervision

[Ph12]   2015- …: Co-supervising (50% with I. Sandu Popa) the PhD of Julien Loudet entitled "Personal Queries on Personal Clouds". The general idea is to allow secure execution of distributed queries on a set of personal clouds associated with users, depending on social links, the user's location or the user's profile.

[Ph11]   2013- …: Co-supervising (with Benjamin Nguyen) the PhD of Athanasia Katsouraki entitled “Access and usage control for personal data in Trusted Cells” started in October 2013.

[Ph10]   2012- 2015: Co-supervised (with Philippe Bonnet) the PhD of Niv Dayan entitled “Modelling and Managing SSD Write-Amplification”, defended in August 2015 at IT University of Copenhagen.

[Ph9]     2011-2015: Co-supervised (with Philippe Bonnet) the PhD of Matias Bjørling entitled “Operating System Support for High-Performance Solid State Drives”, defended in August 2015 at IT University of Copenhagen.

[Ph8]     2009-2012: Co-supervised (with Nicolas Anciaux) the PhD of Lionel Le Folgoc entitled “Personal Data Server Engine: Design and Performance Considerations”, defended in December 2012 at the University of Versailles.

[Ph7]     2008-2011: Co-supervised the PhD of Yanli Guo (with P. Pucheral and Nicolas Anciaux) entitled “Confidentiality and Tamper-Resistance of Embedded Databases”, defended in December 2011 at the University of Versailles.

[Ph6]     2005-2010: Co-supervised (with P. Pucheral) the PhD of Mehdi Benzine entitled ‘Combinaison Sécurisée de Données Publiques et Sensibles dans les Bases de Données’ – defended in October 2010 at the University of Versailles.

[Ph5]     2002-2006: Co-supervised (with P. Pucheral) the PhD of François Dang-Ngoc entitled ‘A Secure Access Controller for XML Documents’, defended in February 2006 at the University of Versailles. François received the runner-up prize (accessit) of the 2007 PhD Thesis Award delivered by ASTI (Fédération des Associations en Sciences et Technologies de l'Information) in the ‘Applications’ category.

[Ph4]     2000-2004: Co-supervised (with P. Pucheral) the PhD of Nicolas Anciaux entitled ‘Database Systems on Chip’ defended in December 2004 at the University of Versailles.

[Ph3]     2000-2001: Co-supervised (with P. Valduriez) the PhD of Ioana Manolescu: "Optimization techniques for integrating distributed, heterogeneous data sources", defended in December 2001 at the University of Versailles. Study of efficient data and function integration using "binding patterns". This research started in September 2000 and corresponds roughly to a third of Ioana's PhD.

[Ph2]     1999-2000: Co-supervised (with P. Valduriez) the PhD of Fabio Porto: "Strategies for parallel execution of queries in distributed scientific databases", defended in Brazil (PUC Rio) in April 2001. Study of the optimization and parallelization of queries involving large data transfers and expensive functions. Supervision during the year Fabio spent at INRIA.

[Ph1]    1997-1998: Co-supervised (with P. Valduriez) the PhD of Olga Kapitskaia: "Query processing in distributed data integration systems", defended in November 1999 at the University of Versailles.

PhD Reviewer (11)

[PR11]   2014: PhD Reviewer of Pierre Olivier: Estimation de performances et de consommation énergétique de systèmes de stockage à base de mémoire flash dans les systèmes embarqués, University of Bretagne Sud.

[PR10]   2012: PhD Reviewer of Toufik Sarni: ‘Vers une Mémoire Transactionnelle Temps Réel’, University of Nantes.

[PR9]    2012: PhD Reviewer of Stéphane Jacob: ‘Protection Cryptographique des Bases de Données : Conception et Cryptanalyse’, University of Paris 6.

[PR8]    2011: PhD Reviewer of Brice Chardin: ‘SGBD Open Source pour Historisation de Données et Impact des Mémoires Flash’, INSA de Lyon.

[PR7]    2010: PhD Reviewer of Wenceslao Palma: ‘Continuous Join Query Processing in Structured P2P Networks’, University of Nantes.

[PR6]    2007: PhD Reviewer of Reza Akbarinia: ‘Data Access Techniques in P2P Systems’, University of Nantes.

[PR5]    2007: PhD Reviewer of Jean Michel Busca: ‘Pastis : un système pair à pair de gestion de fichiers’, University of Paris 6.

[PR4]    2007: PhD Reviewer of Bogdan Cautis: ‘Signing and Reasoning about Tree Updates’, University of Paris Sud.

[PR3]    2007: PhD Reviewer of Thierry Sans: ‘Specifying and Deploying a Security Policy in Next Generation Information Systems’, ENST Bretagne.

[PR2]    2006: PhD Reviewer of David Coquil: ‘Conception et mise en œuvre de proxies sémantiques et Coopératifs’ – INSA Lyon.

[PR1]    2001: PhD Reviewer of Richard Wong: ‘Parallel Evaluation of Very Large Database Queries’. Griffith University.

Jury member (14)

[Ju14]   2015: Jury member for the PhD defense of Niv Dayan ‘Modelling and Managing SSD Write-Amplification’, IT University of Copenhagen.

[Ju13]   2014: Jury member for the PhD defense of Andres Fernando Sanoja Vargas ‘Segmentation de Pages Web, Évaluation et Applications’, Pierre et Marie Curie University (Paris 6).

[Ju12]   2014: Jury member for the PhD defense of Jean-Pierre Lozi ‘Towards More Scalable Mutual Exclusion for Multicore Architectures’, Pierre et Marie Curie University (Paris 6).

[Ju11]   2014: Jury member for the PhD defense of Matteo Casalino ‘Approches pour la Gestion de Configurations de Sécurité dans les Systèmes d’Information Distribués’, Claude Bernard University (Lyon).

[Ju10]   2012: Jury member for the PhD defense of Lionel Le Folgoc: ‘Personal Data Server Engine: Design and Performance Considerations’, University of Versailles.

[Ju9]     2012: Jury member for the PhD defense of Hien Thi Thu Truong: ‘A Contract-based and Trust-aware Collaboration Model’, University of Lorraine.

[Ju8]     2011: Jury member for the PhD defense of Yanli Guo: ‘Confidentiality and tamper-resistance of embedded databases’, University of Versailles.

[Ju7]     2010: Jury member for the PhD defense of Bhaskar Biswas: ‘Implementational aspects of code-based cryptography’, Ecole Polytechnique.

[Ju6]     2010: Jury member for the PhD defense of Mehdi Benzine: ‘Combinaison sécurisée de données publiques et sensibles dans les bases de données’, University of Versailles.

[Ju5]     2009: President of the Jury for the PhD defense of Julien Cordry: ‘La Mesure de Performance dans les Cartes à Puce’, CNAM, Paris.

[Ju4]     2006: Jury member for the PhD defense of David Bromberg: ‘Résolution de l'hétérogénéité des intergiciels d'un environnement ubiquitaire’, University of Versailles.

[Ju3]     2005: Jury member for the PhD defense of François Dang Ngoc: ‘Client-Based Access Control for XML documents’, University of Versailles.

[Ju2]     2004: Jury member for the PhD defense of Nicolas Anciaux: ‘Database Systems on Chip’, University of Versailles.

[Ju1]     2001: Jury member for the PhD defense of Laurent Némirovski: ‘Support pour l’optimisation de requêtes’, Paris I University.

Program and Organization committee member

Program committee Chair



Bases de données avancées (BDA)

Demonstrations Chair



Int. Conf. on Management of Data (ACM SIGMOD)

Program committee member



Int. Conf. on Extending Database Technology (EDBT)



Int. Conf. on Extending Database Technology (EDBT)



Workshop on Personal Data Analytics in the Internet of Things (PDA@IOT)



IEEE/IFIP International Conference on Embedded and Ubiquitous Computing (EUC)



Bases de Données Avancées (BDA)



Int. Conf. on Extending Database Technology (EDBT)



Int. Conf. on Data Engineering (ICDE)



Int. Workshop on Big Data Management on Emerging Hardware (HardBD)



Bases de Données Avancées (BDA)



Int. Conf. on Mobile Web Information Systems (MobiWIS)



Int. Conf. on Extending Database Technology (EDBT)



Int. Conf. on Very Large Data Bases PhD Workshop (VLDB PhD)



Int. Conf. on Mobile Web Information Systems (MobiWIS)



DASFAA workshop on Flash-based Database Systems (FlashDB)



Int. Conf. on Very Large Data Bases (VLDB)



Int. Conf. on Extending Database Technology (EDBT)



Int. Conf. on Financial Cryptography (FC)



Int. Conf. on Management of Data (ACM SIGMOD)



Int. Conf. on Management of Data (ACM SIGMOD) – Tutorials



Int. Conf. on Extending Database Technology (EDBT) – Demonstrations



Bases de Données Avancées (BDA)



Journées Francophones Mobilité et Ubiquité (UbiMob)



Int. Conf. on Management of Data (ACM SIGMOD) – Demonstrations



Int. Conf. on Information and Knowledge Management (ACM CIKM)



Int. Conf. on Mobile Data Management (MDM)



Symp. on Information, Computer and Communications Security (ACM ASIACCS)



Int. Conf. on Database Systems for Advanced Applications (DASFAA)



Gestion de données dans les systèmes d’information pervasifs (GEDSIP)