
A Tool for Managing Security Policies in Organisations

A. V. Álvarez1 and R. Monroy1 and J. Vázquez2

1Tecnológico de Monterrey, Campus Estado de México,

Km 3.5 Carretera al Lago de Guadalupe,

Col. Margarita Maza de Juárez, Atizapán de Zaragoza,

Estado de México, Mexico, 52926

Tel: +52 55 5864 5751

{A00472185,raulm}@itesm.mx

Banco de México

Avenida 5 de Mayo, Col. Centro, Del. Cuauhtémoc

México D.F., Mexico, 06059

jjvazquez@banxico.org.mx

Abstract. Security policies are rules aimed at protecting the resources of an organisation from the risks associated with computer usage. Designing, implementing and maintaining security policies are all error-prone and time-consuming. We report on a tool that helps manage the security policies of an organisation. Security policies are formalised using first-order logic with equality and the unique names assumption, closely following the security policy language suggested in [1]. The tool includes a link to an automated theorem prover, Otter [2], and to a model finder, Mace [3], used to formally verify a set of formal security policies. It also includes a GUI and a number of links for reading information and security policies from organisation databases and access control lists.

**1 Introduction**

Security policies are rules that prescribe how to manage IT resources in order to protect them from the risks associated with computer usage. If properly defined, they help ensure the goals of computer security, namely: the integrity, confidentiality, and availability of IT resources.

Crafting proper security policies is a very difficult task, for two reasons. First, security policies may be ambiguous or clash with one another; hence, they ought to be formalised. This formalisation, however, is tedious and error-prone. Second, policies quickly become obsolete, so maintaining security policies is a never-ending and time-consuming activity. This situation prompts the construction of a tool that helps a user readily design and develop proper security policies.

This paper reports on a security policy manager, called e-policy manager, which has the following features:

1. It captures security policies using a language similar to the security policy language L7, defined by Halpern and Weissman [1]. Our language has a clear and precise semantics, namely: first-order logic with equality, together with the unique names assumption.

2. It reasons about security policies simultaneously running Otter [2], a first-order theorem prover, and Mace [3], a first-order model finder.

3. It eases the capture of security policies, reading information from different sources, including database systems, access control lists, etc. E-policy manager also comes with a GUI for human interaction.

Currently, e-policy manager deals only with security policies concerned with the protection of information.

Paper overview. The rest of this paper is organised as follows. In Section 2, we describe how to express security policies using first-order logic with equality and the unique names assumption, and introduce the verification tasks we are interested in. In Section 3, we recapitulate Halpern and Weissman's results to characterise the computational complexity of our verification tasks. In Section 4, we show how to apply the pair Otter and Mace to conduct the verification process. In Section 5, we describe how to use e-policy manager, including its graphical user interface for capturing security policies. In Section 6, we report on the results of a psychological validity test carried out on e-policy manager. In Section 7, we discuss related work, and in Section 8, we present the conclusions drawn from our research.

**2 Expressing Security Policies Using First-Order Logic**

Hereafter, we assume the reader is familiar with the syntax of first-order logic, including terms, atoms, literals and well-formed formulae; with the semantics of first-order logic, including models and valuations; and with validity and satisfiability of first-order formulae.

In this paper, security policies are mainly of either of two types: permitting or denying. A permitting (respectively denying) security policy conveys the conditions under which someone, the subject, is allowed (respectively forbidden) to perform an action on some object. Accordingly, the vocabulary of our language is assumed to contain at least four collections of predicates: one denoting subjects (agents, processes, officers, etc.), one denoting objects (files, directories, databases, applications, etc.), one denoting actions (read, write, execute, etc.), and another denoting constraints (roles, etc.). The vocabulary also contains a reserved binary predicate, called permitted. The literal permitted(S, A) means that S, a term of type subject, is allowed to carry out A, a (compound) term representing an action over some term of type object.

A security policy is a sentence of the form:¹

∀X1:T1, …, Xn:Tn. (C → [¬]permitted(S, A))    (1)

where C is a conjunction of literals, S and A are terms, and [¬]permitted(S, A) indicates that permitted may or may not be negated. A policy of the form (1) is called a standard policy [1]. Standard policies are generally enough to express most security policies. For example, the policies “only security officers may edit the password file”, “anyone who is allowed to edit a file may read it”, “anyone who is forbidden to read a file may not edit it” and “employees may read all the information associated with their department of affiliation” can be expressed as follows:

∀X:staff, Y:posts. (post(X, officer, sec) → permitted(X, write(passwords)))
∀X:staff, Y:posts. (¬post(X, officer, sec) → ¬permitted(X, write(passwords)))
∀X:staff, F:info. (permitted(X, write(F)) → permitted(X, read(F)))
∀X:staff, F:info. (¬permitted(X, read(F)) → ¬permitted(X, write(F)))
∀X:staff, Y:posts, Z:dpt, F:info. (post(X, Y, Z) ∧ blng2(F, Z) → permitted(X, read(F)))

¹ We sometimes find it convenient to abbreviate ∀X. (T(X) → P(X)) and ∃X. (T(X) ∧ P(X)) by ∀X:T. P(X) and ∃X:T. P(X), respectively.
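To make the closed-world reading of these example policies concrete, the following Python sketch evaluates ground permitted queries over a small environment. It is illustrative only, not the tool's implementation; the relation names post and blng2 and the constants (anna, bob, etc.) mirror the examples above.

```python
# Sketch (illustrative only): ground facts plus the example standard
# policies, evaluated under "forbidden unless explicitly allowed".

# Environment: post(Subject, Role, Department), blng2(File, Department).
post = {("anna", "officer", "sec"), ("bob", "mgr", "it")}
blng2 = {("report", "it")}

def permitted(subject, action, obj):
    """Decide a ground permitted(S, action(obj)) query."""
    if action == "write" and obj == "passwords":
        # "only security officers may edit the password file"
        return (subject, "officer", "sec") in post
    if action == "read":
        # "employees may read the information of their department", plus
        # "anyone who is allowed to edit a file may read it"
        in_dept = any(s == subject and (obj, d) in blng2 for (s, _, d) in post)
        return in_dept or permitted(subject, "write", obj)
    return False  # closed world: no permitting policy applies
```

For instance, permitted("anna", "write", "passwords") holds, while permitted("anna", "read", "report") does not, since anna is not affiliated with the it department.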

The environment is a non-empty set of relevant facts describing the organisation [1]. “John is a manager”, “managers are employees”, “file F is of security clearance l”, etc. are all examples of facts in a typical environment. Formally, an environment is a set of sentences, none of which contains the permitted predicate. The environment is said to be standard if it can be partitioned into two sets, E0 and E1, where E0, called a basic environment, contains only ground literals, while E1 contains only universally quantified formulae.

The verification tasks we are interested in are formalised as follows. Let E and P1, …, Pn respectively denote an environment and some policies, and let S and A be closed terms. We want to address the following queries:

1. Is individual S allowed (respectively forbidden) to carry out action A? This query amounts to asking whether E ∧ P1 ∧ … ∧ Pn → permitted(S, A) (respectively E ∧ P1 ∧ … ∧ Pn → ¬permitted(S, A)) is valid.

2. What is the profile of individual S? This query amounts to asking whether, and in how many ways, E ∧ P1 ∧ … ∧ Pn → permitted(S, X) ∨ ans(X) is valid, where ans(X) is an answer literal for the associated fill-in-the-blank question.

3. Are the policies consistent? This query amounts to asking whether E ∧ P1 ∧ … ∧ Pn is satisfiable.

4. Are the policies Bell-LaPadula compliant? This query amounts to asking whether E ∧ P1 ∧ … ∧ Pn satisfies the simple security condition, clearance(S) ≤ clearance(O) → permitted(S, read(O)), and the *-property, permitted(S, write(O)) → clearance(S) ≤ clearance(O), of the Bell-LaPadula model.
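Query 4 can be pictured over a finite, explicit permission table. The Python sketch below checks both conditions by exhaustive enumeration; the names and clearance values are hypothetical, and the orientation of the ≤ comparisons follows the formulas above rather than any particular textbook presentation of Bell-LaPadula.

```python
# Sketch: exhaustive Bell-LaPadula check over an explicit, finite model.
# clearance maps subjects and objects to levels; permits lists the ground
# permitted(S, action(O)) facts the policies induce.  Illustrative only.

clearance = {"anna": 2, "bob": 4, "memo": 3}
subjects, objects = ["anna", "bob"], ["memo"]
permits = {("anna", "read", "memo"), ("anna", "write", "memo")}

def simple_security_ok():
    # clearance(S) <= clearance(O) -> permitted(S, read(O))
    return all((s, "read", o) in permits
               for s in subjects for o in objects
               if clearance[s] <= clearance[o])

def star_property_ok():
    # permitted(S, write(O)) -> clearance(S) <= clearance(O)
    return all(clearance[s] <= clearance[o]
               for (s, a, o) in permits if a == "write")
```

Adding, say, ("bob", "write", "memo") to permits would violate the *-property, since bob's clearance (4) exceeds that of memo (3).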

**3 Tractability Results**

Halpern and Weissman have shown that the problem in which we are interested is in general undecidable [1]; even its decidable fragments cannot be handled efficiently unless we impose severe restrictions. Halpern and Weissman have, in particular, shown that the language L7 is the most amenable to computation:

Theorem 4.3 [1]: Let Φ be a vocabulary that contains permitted (and possibly other predicate, constant, and function symbols). Let L7 consist of all closed formulas F in Lfo(Φ) of the form E0 ∧ E1 ∧ P1 ∧ … ∧ Pn → permitted(S, A), where E0 is a basic environment, E1 is a conjunction of universal formulas, P1 ∧ … ∧ Pn is a conjunction of standard policies, and both S and A are closed terms of the appropriate sort, such that:

1. E0 has m constants,

2. no conjunct in E1 ∧ P1 ∧ … ∧ Pn has an inequality in its antecedent, and

3. each conjunct in E1 ∧ P1 ∧ … ∧ Pn has at most one literal that is bipolar in E1 ∧ P1 ∧ … ∧ Pn relative to the equality statements in E0.²

We can then determine the validity of a query of type 1 in time O(|E1 ∧ P1 ∧ … ∧ Pn| log |E1 ∧ P1 ∧ … ∧ Pn| + b|Cl| + T), where b is the number of bipolar pairs in F relative to the equality statements in E0, Cl is the longest conjunct in F, and T depends on the number of variables that appear as an argument to an instance of permitted (see [1] for further details).

Unfortunately, our security policies are not part of L7, because they might not satisfy condition 2. We have been forced to include inequalities in the antecedent of a policy for the sake of completeness. We illustrate this by means of a simple example.

Consider again the set of security policies defined in the previous section. To decide whether these policies permit (respectively forbid) anna to edit the password file, we must know if the statement post(anna, officer, sec) is true (respectively false). But if anna is a head of department, neither of these queries can be decided unless we explicitly assert ¬post(anna, officer, sec).

To get around this incompleteness issue, we adopt a conservative meta-rule: any action is forbidden unless it is explicitly allowed. This meta-rule is implemented by (automatically) including the closure of every permitting security policy. The closure of a permitting policy P, denoted R(P), is the smallest set of denying policies that are logically consistent with P. Consider, for example, the policy “all heads of department are permitted to read information that is classified as confidential (clearance(F) = 4)”, in symbols:

∀X:staff, Z:dpt, F:info. (post(X, mgr, Z) ∧ clearance(F) = 4 → permitted(X, read(F)))

The closure of this policy is given by:

∀X:staff, Y:posts, Z:dpt, F:info. (post(X, Y, Z) ∧ Y ≠ mgr ∧ clearance(F) = 4 → ¬permitted(X, read(F)))

which is then complemented with another policy stating that “anyone who is forbidden to read a file may not edit it, share it or print it” (see Section 2).

The unique names assumption is used to establish the inequality of two objects with distinct names. Unfortunately, R(P) cannot be characterised logically, only procedurally.
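Since R(P) is built procedurally, the generation step can be pictured as follows. The sketch assumes a finite, known set of roles (ROLES is hypothetical, as is the string rendering of policies); under the unique names assumption, emitting one denying policy per alternative role is equivalent to the single Y ≠ mgr policy shown above.

```python
# Sketch: procedural closure of a role-conditioned permitting policy.
# For "post(X, r, Z) ∧ C → permitted(X, A)" we emit one denying policy
# per alternative role r', mirroring the mgr example above.

ROLES = ("mgr", "officer", "clerk")  # hypothetical finite role set

def closure(role, constraint, action):
    """R(P) for a permitting policy conditioned on post(X, role, Z)."""
    return [
        f"all X:staff, Z:dpt. (post(X, {r}, Z) & {constraint}"
        f" -> -permitted(X, {action}))"
        for r in ROLES if r != role
    ]
```

For instance, closure("mgr", "clearance(F) = 4", "read(F)") yields two denying policies, one for officer and one for clerk.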

² A literal l is said to be bipolar in a formula F, written in conjunctive normal form, if l occurs in F and there is another literal l′ in F such that l and ¬l′ unify; that is, ∃σ. lσ ≡ ¬l′σ. If lσ ≡ ¬l′σ follows from a set E of equality statements, then l is said to be bipolar in F relative to E.

Even though our security policies cannot be accommodated in L7, our experiments show that our four verification tasks can be carried out quickly, as discussed below.

**4 Reasoning about Security Policies**

Once captured, both the policies and the environment are given to an automated first-order theorem prover. We have chosen to use Otter [2], since it is well-established. Otter’s main inference rules are resolution and paramodulation. Resolution is well known to be refutation complete: if a formula is unsatisfiable, then resolution will eventually deduce the proposition false, the empty clause.

First-order logic is only semi-decidable: if a formula is satisfiable, then the resolution procedure may not terminate. To partly address this problem, we simultaneously apply Mace (Models And Counter-Examples), a searcher for finite models of first-order and equational statements [3]. Mace serves as a complementary companion to Otter: given an input first-order conjecture, Otter will search for a proof while Mace searches for a counter-example. Mace’s engine is a Davis-Putnam-Logemann-Loveland propositional decision procedure.
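To make the encoding concrete, a type-1 query about the password-file policy of Section 2 could be posed to Otter (and, unchanged, to Mace, which reads the same input format) roughly as follows. This is a hand-written sketch, not the file e-policy manager actually generates; sort predicates and unique-names axioms are elided.

```
set(auto).

formula_list(usable).
all x (post(x,officer,sec) -> permitted(x,write(passwords))).
all x (-post(x,officer,sec) -> -permitted(x,write(passwords))).
post(anna,officer,sec).
-permitted(anna,write(passwords)).   % negated conclusion, for refutation
end_of_list.
```

On this input Otter should derive the empty clause, establishing that anna is permitted to edit the password file; on a satisfiable variant, Mace can instead report a finite counter-model.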

Our four verification tasks are then tackled as follows. Let E0 be a conjunction of ground formulae, E1 a conjunction of universal formulas, P1 ∧ … ∧ Pn a conjunction of standard policies, and let S and A be closed terms representing a subject and an action over some object, respectively. Then:

1. Asking whether individual S is allowed to carry out action A amounts to giving both Otter and Mace the conjecture E0 ∧ E1 ∧ P1 ∧ … ∧ Pn ∧ ¬permitted(S, A). If the query is a theorem, Otter will hopefully deduce the empty clause; otherwise, Mace will hopefully find a counter-example. We proceed similarly to verify whether S is forbidden to carry out action A.

2. To determine the profile of individual S, we give Otter the conjecture E0 ∧ E1 ∧ P1 ∧ … ∧ Pn ∧ (¬permitted(S, X) ∨ ans(X)) and ask it to find as many proofs as possible.

3. Asking whether the policies are consistent amounts to giving both Otter and Mace the conjecture E0 ∧ E1 ∧ P1 ∧ … ∧ Pn. If Otter deduces the empty clause, we use the proof to automatically hint to the user which security policies are thought to be in conflict; otherwise, Mace will hopefully find a model, witnessing consistency.

4. Determining whether the security policies are Bell-LaPadula compliant amounts to giving Otter and Mace two formulae: i) for the simple security condition, E0 ∧ E1 ∧ P1 ∧ … ∧ Pn ∧ clearance(S) ≤ clearance(O) ∧ ¬permitted(S, read(O)); and ii) for the *-property, E0 ∧ E1 ∧ P1 ∧ … ∧ Pn ∧ permitted(S, write(O)) ∧ ¬(clearance(S) ≤ clearance(O)).

In our experiments, Otter was able to quickly find inconsistencies in the input security policies (deriving the empty clause), if there were any, but usually spent a while otherwise. Notice that, if the input security policies are consistent, Otter may run forever.
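The Otter/Mace race described above can be sketched as a first-answer-wins composition. The workers below are stand-ins (real ones would invoke the otter and mace2 binaries via subprocess and parse their output); cancelling the loser is best-effort, since a running thread cannot be interrupted.

```python
import concurrent.futures
import time

def prover(conjecture):
    """Stand-in for Otter: pretend to search for the empty clause."""
    time.sleep(0.05)
    return ("proof", conjecture)

def model_finder(conjecture):
    """Stand-in for Mace: pretend to search for a finite counter-model."""
    time.sleep(0.5)
    return ("counter-model", conjecture)

def race(conjecture):
    """Run both searches concurrently; report whichever finishes first."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(w, conjecture) for w in (prover, model_finder)]
        done, _not_done = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        for f in _not_done:
            f.cancel()  # best-effort cancellation of the slower search
        return next(iter(done)).result()
```

Here race("E & P -> permitted(s, a)") returns the prover's answer, since the stand-in prover finishes first.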