Abstract

We address the challenging problem of confidentiality-preserving data publishing: how to repeatedly update a released weakened view under a modification of the input parameter values, while continuously enforcing the confidentiality policy, i.e., without revealing a prohibited piece of information, either through the updated view or retrospectively through the previous versions of the view. In our semantically ambitious approach, a weakened view is determined by a two-stage procedure that takes three input parameters: (i) a confidentiality policy consisting of prohibitions in the form of pieces of information that the pertinent receiver of the view should not be able to learn, (ii) the assumed background knowledge of that receiver, and (iii) the actually stored relation instance, or the respective modification requests. Assuming that the receiver is aware of the specification of both the underlying view generation procedure and the proposed updating procedure, and additionally of the declared confidentiality policy, the main challenge has been to block all meta-inferences that the receiver could draw by relating subsequent views.

Extended Abstract

Within a framework of cooperating with partners and sharing resources with them, managing the fundamental asset of one's own information – whether personal or institutional – has evolved into a main challenge of IT security, leading to diverse computational techniques for enforcing an owner's various interests. These include confidentiality-preserving data publishing, which aims at hiding specific pieces of information while still providing sufficient availability. One class of techniques for confidentiality-preserving data publishing distorts data by weakening the still true information content of released data, e.g., by explicitly erasing sensitive data or by substituting sensitive data items with suitably generalized ones, as applied, for instance, in k-anonymization with l-diversity.
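To make the idea of weakening by generalization concrete, the following minimal Python sketch generalizes a quasi-identifier column until each generalized value occurs at least k times, in the spirit of k-anonymization. The ZIP-code hierarchy, the sample data, and the function names are hypothetical illustrations, not the procedure developed in this work.

```python
# Toy generalization hierarchy for ZIP codes: suppress trailing digits.
def generalize_zip(zip_code: str, level: int) -> str:
    """Replace the last `level` digits of a ZIP code by '*'."""
    if level <= 0:
        return zip_code
    return zip_code[:-level] + "*" * level

def weaken(rows, column, k):
    """Generalize `column` stepwise until every value occurs >= k times."""
    level = 0
    while level <= len(rows[0][column]):
        view = [dict(r, **{column: generalize_zip(r[column], level)})
                for r in rows]
        counts = {}
        for r in view:
            counts[r[column]] = counts.get(r[column], 0) + 1
        if all(c >= k for c in counts.values()):
            return view
        level += 1
    return view

rows = [
    {"zip": "44225", "disease": "flu"},
    {"zip": "44227", "disease": "asthma"},
    {"zip": "44801", "disease": "flu"},
    {"zip": "44803", "disease": "cold"},
]
# One suppression step already yields groups of size 2 for these rows.
print(weaken(rows, "zip", 2))
```

Note that the released view still contains only true, albeit weakened, information: each generalized ZIP value covers the actually stored one.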

Whereas the effectiveness of many such techniques relies on the appropriateness of more or less intuitive concepts, such as quasi-identifiers, our own approach has more ambitiously been based on a fully formalized notion of semantic confidentiality in terms of inference-proofness. This notion considers an authorized receiver who profits from some background knowledge and unlimited computational resources for rational reasoning. More specifically, in previous work we conceptually designed a two-stage view generation procedure that weakens the information content of an actually stored relation instance, verified the requested confidentiality property, and experimentally evaluated the runtime efficiency. This procedure takes three input parameters: (i) a confidentiality policy consisting of prohibitions in the form of pieces of information that the pertinent receiver of the view should not be able to learn, (ii) the assumed background knowledge of that receiver in the form of single-premise tuple-generating data dependencies, and (iii) the actually stored relation instance.
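The semantic requirement can be illustrated by a small, deliberately simplified Python sketch of an entailment check: the released facts are saturated under single-premise implications (standing in for ground instances of the tuple-generating dependencies), and a view counts as harmless only if the saturation contains no prohibited fact. The predicate names and the encoding of facts as strings are hypothetical.

```python
def closure(facts, rules):
    """Saturate released facts under single-premise implications."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

def inference_proof(view, rules, policy):
    """A view is harmless iff its closure entails no prohibited fact."""
    return closure(view, rules).isdisjoint(policy)

view = {"treat(mary, medA)"}
rules = [("treat(mary, medA)", "ill(mary, aids)")]  # single-premise dependency
policy = {"ill(mary, aids)"}                        # prohibition
print(inference_proof(view, rules, policy))  # prints False
```

Here the seemingly innocent released fact together with the background knowledge entails the prohibited one, so this view would have to be weakened further before release.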

In the present work we address and solve the problem of efficiently updating a released weakened view under a modification of the input parameter values, while continuously enforcing the confidentiality policy, i.e., without revealing a prohibited piece of information, either through the updated view or retrospectively through the previous versions of the view. Conservatively assuming that the receiver is aware of the specification of both the view generation procedure and the updating procedure and, additionally, of the declared confidentiality policy – and thus of the whole security configuration consisting of the policy and the background knowledge – the main challenge has been to block all meta-inferences that the receiver could draw by relating subsequent views. The required blocking is achieved by establishing sufficient indistinguishability between the actual, possibly harmful situation and a fictitious harmless situation.
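A minimal Python sketch of the indistinguishability idea, under strongly simplifying assumptions that are not the procedure of this work: weakening is modelled as merely dropping directly prohibited facts, and a single fictitious harmless instance is maintained explicitly. An update is then released only in a form that the fictitious instance would produce as well, so relating subsequent views gives the receiver no way to tell the two situations apart.

```python
def weakened_view(instance, policy):
    """Simplified weakening: drop every directly prohibited fact."""
    return frozenset(f for f in instance if f not in policy)

def publish_update(actual, fictitious, policy, history):
    """Release the update only in a form shared by both situations."""
    v_actual = weakened_view(actual, policy)
    v_fict = weakened_view(fictitious, policy)
    # Releasing only the common part keeps the whole view sequence
    # consistent with the harmless fictitious instance as well.
    released = v_actual & v_fict
    history.append(released)
    return released

policy = {"ill(mary, aids)"}
actual = {"ill(mary, aids)", "treat(mary, medA)"}
fictitious = {"treat(mary, medA)"}  # a harmless alternative instance
history = []
print(publish_update(actual, fictitious, policy, history))
```

Because every released view could equally well stem from the fictitious instance, no meta-inference across the recorded history can separate the harmful situation from the harmless one.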