Chapter 6: Integrity Policies
This lecture is based on Chapter 6, Sections 6.1, 6.2, 6.4, and 6.5, but only the first pages of Sections 6.5.1 and 6.5.2.
Overview
Requirements
Very different from confidentiality policies
Biba’s models
Clark-Wilson model
Trust model

Requirements of Policies
1. Users will not write their own programs, but will use existing production programs and databases.
2. Programmers will develop and test programs on a non-production system; if they need access to actual data, they will be given production data via a special process, but will use it on their development system.
3. A special process must be followed to install a program from the development system onto the production system.
4. The special process in requirement 3 must be controlled and audited.
5. The managers and auditors must have access to both the system state and the system logs that are generated.

Principles of integrity policies in commercial systems
Separation of duty
If a critical operation has two or more steps, at least two different people should perform the steps
Example: moving a program from the development system to the production system. The person who developed the program cannot install it on the production system.
Instead, the installer must certify that the program works correctly.
The developer's fraud can succeed only if the installer does not test the program correctly or is in collusion with the developer.

Principles of integrity policies in commercial systems cont.
Separation of function
Developers don’t develop new programs on the production system, since this is a potential threat to production data.
Developers don’t use production data on the development system.
Instead some sanitized data may be used.
Development environment must be as similar as possible to the actual production environment.
E.g., a known problem is how to obtain realistic test data for developing intrusion detection systems.

Principles of integrity policies in commercial systems cont.
Auditing
Is the process of analyzing systems to determine what actions took place and who performed them.
Auditing depends on extensive logging.
Auditing is important when a program is moved from the development system to the production system, since integrity mechanisms don’t constrain the certifier.
Problem of information aggregation:
Databases with sensitive information may allow only small amounts of data to be retrieved at a time, but aggregation of many such queries can reveal sensitive information and has to be prevented.

Biba Integrity Model
Biba defined three policies, but we cover only the Strict Integrity Policy:
Set of subjects S, set of objects O, and set of integrity levels I
Relation ≤ ⊆ I × I holds when the second integrity level dominates the first or is the same
Function min: I × I → I returns the lesser of two integrity levels
Function i: S ∪ O → I returns the integrity level of an entity
Relations:
r ⊆ S × O means that s ∈ S can read o ∈ O
w ⊆ S × O means that s ∈ S can write to o ∈ O
x ⊆ S × S means that s ∈ S can invoke (execute) s1 ∈ S

Intuition for meaning of integrity levels is related to trust
The higher the level, the more confidence
That a program will execute correctly
That data is accurate and/or reliable
Note the relationship between integrity and trustworthiness
Important point: integrity levels are not the security levels of the Bell-LaPadula (BLP) model.
Security labels from BLP primarily limit the (illicit) flow of information
Integrity labels primarily inhibit the (unauthorized) modification of information.

Strict Integrity Policy
Similar to the Bell-LaPadula model, with the following rules:

s ∈ S can read o ∈ O iff i(s) ≤ i(o)
– i.e. subject s does not want to get ‘contaminated’ by lower-integrity data
s ∈ S can write to o ∈ O iff i(o) ≤ i(s)
– i.e. subject s cannot contaminate higher-integrity objects
s1 ∈ S can execute s2 ∈ S iff i(s2) ≤ i(s1)
Add compartments and discretionary controls to get the full dual of the Bell-LaPadula model, with “no read down” and “no write up”.
The term “Biba Model” usually refers to this policy, although Biba also defined the low-water-mark policy and the ring policy.
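To make the three rules concrete, here is a minimal Python sketch of the strict integrity checks. The Entity class, integer levels, and entity names are illustrative assumptions, not part of Biba's formal model.

# Minimal sketch of Biba strict integrity checks (illustrative names only).
# Integrity levels are modeled as integers: a higher value means higher integrity.

from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    level: int                                # i(entity): integrity level

def can_read(s: Entity, o: Entity) -> bool:
    # "No read down": s may read o only if i(s) <= i(o)
    return s.level <= o.level

def can_write(s: Entity, o: Entity) -> bool:
    # "No write up": s may write o only if i(o) <= i(s)
    return o.level <= s.level

def can_execute(s1: Entity, s2: Entity) -> bool:
    # s1 may invoke s2 only if i(s2) <= i(s1)
    return s2.level <= s1.level

if __name__ == "__main__":
    editor = Entity("editor", level=2)        # a production program
    audit_log = Entity("audit_log", level=3)  # high-integrity object
    scratch = Entity("scratch", level=1)      # low-integrity object

    assert can_read(editor, audit_log)        # reading up is allowed
    assert not can_write(editor, audit_log)   # writing up is denied
    assert can_write(editor, scratch)         # writing down is allowed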

The LOCUS operating system implemented the Biba strict integrity model
Goal: prevent untrusted software from altering data or other software – limit the execution domain for each program
Approach in LOCUS: make levels of trust explicit
The credibility rating (Biba integrity level) of a program is based on an estimate of the software’s trustworthiness (0 untrusted, n highly trusted)
A trusted file system contains software with a single credibility level
A user (process) has a risk level, the highest credibility level at which that process can execute; it plays the role of i(s1)
Note that in Biba, s1 ∈ S can execute s2 ∈ S iff i(s2) ≤ i(s1)
A user can execute programs with credibility level at least as great as the user’s risk level, so the risk level constrains i(s2).
A process must use the run-untrusted command to run software at a lower credibility level
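A small sketch of the LOCUS-style decision rule described on this slide; the function name, parameters, and run_untrusted flag are hypothetical and do not reflect the actual LOCUS interface.

# Sketch of the LOCUS-style rule above (hypothetical API, not the real LOCUS code).

def may_run(program_credibility: int, user_risk_level: int,
            run_untrusted: bool = False) -> bool:
    # A user may run a program whose credibility level is at least the user's
    # risk level; running lower-credibility software requires the explicit
    # run-untrusted escape hatch.
    if program_credibility >= user_risk_level:
        return True
    return run_untrusted

# Example: a user with risk level 3 cannot run a level-1 tool by default.
assert not may_run(program_credibility=1, user_risk_level=3)
assert may_run(program_credibility=1, user_risk_level=3, run_untrusted=True)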

Clark-Wilson Model
Integrity defined by a set of constraints
Data in a consistent or valid state when it satisfies these constraints
Example: Bank
D today’s deposits, W withdrawals, YB yesterday’s balance, TB today’s balance
Integrity constraint: TB = D + YB – W
Well-formed transactions move the system from one consistent state to another
Issue: who examines and certifies that transactions are done correctly?
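A minimal sketch of an IVP-style check for the integrity constraint above; the function and parameter names are illustrative.

# IVP-style consistency check for the bank example: TB = D + YB - W

def balance_is_consistent(deposits, withdrawals, yesterdays_balance, todays_balance):
    # Returns True only if today's balance satisfies the integrity constraint.
    return todays_balance == deposits + yesterdays_balance - withdrawals

# Example: YB = 1000, D = 200, W = 50, so TB must be 1150.
assert balance_is_consistent(200, 50, 1000, 1150)
assert not balance_is_consistent(200, 50, 1000, 1200)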

Entities
CDIs: constrained data items
Data subject to integrity controls, e.g. account balance
UDIs: unconstrained data items
Data not subject to integrity controls, e.g. gifts for accounts
TPs: transaction procedures
Procedures that take the system from one valid state to another, e.g. changing account balance.
IVPs: integrity verification procedures
Procedures that test whether the CDIs conform to the integrity constraints, e.g. checking that an account is balanced
IVPs check whether TPs were carried out correctly
e.g. when an invoice for payment arrives at the company, who checks the invoice and who signs the cheque?

Certification Rules 1 and 2
CR1 When any IVP is run, it must ensure all CDIs are in a valid state
CR2 For some associated set of CDIs, a TP must transform those CDIs in a valid state into a (possibly different) valid state
This defines the relation certified, which associates a set of CDIs with a particular TP; a TP must be certified to work on its CDIs.
Example: in the bank example, the TP is the balancing procedure and the CDIs are the accounts
CR2 implies that a TP may corrupt a CDI if it is not certified to work on that CDI.
E.g. a TP for stock investments should not operate on a bank account; therefore enforcement rules are needed.

Enforcement Rules 1 and 2: which programs (TPs) can access CDIs and who can run them
ER1 The system must maintain the certified relations and must ensure that only TPs certified to run on a CDI manipulate that CDI.
ER2 The system must associate a user with each TP and set of CDIs. The TP may access those CDIs on behalf of the associated user. The TP cannot access that CDI on behalf of a user not associated with that TP and CDI.
The system must maintain and enforce the certified relation
The system must also restrict access based on user ID (the ‘allowed’ relation), i.e. the triple (user, TP, {CDI set}) must be defined and allowed.
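A minimal sketch of how the certified relation (ER1) and the allowed triples (ER2) might be checked before a TP runs; the relation contents, TP and CDI names, and the may_run helper are hypothetical.

# Sketch of the certified (ER1) and allowed (ER2) relations; CDIs, TPs, and
# users are plain strings here.

certified = {                       # which TPs may touch which CDIs (CR2/ER1)
    "post_deposit": {"account_balance", "deposit_log"},
}
allowed = {                         # (user, TP) -> {CDI set} triples (ER2)
    ("alice", "post_deposit"): {"account_balance", "deposit_log"},
}

def may_run(user, tp, cdis):
    # ER1: the TP must be certified for every CDI it touches.
    if not cdis <= certified.get(tp, set()):
        return False
    # ER2: the user must be associated with this TP and these CDIs.
    return cdis <= allowed.get((user, tp), set())

assert may_run("alice", "post_deposit", {"account_balance"})
assert not may_run("bob", "post_deposit", {"account_balance"})     # ER2 fails
assert not may_run("alice", "post_deposit", {"stock_portfolio"})   # ER1 fails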

Users and Rules
More on maintaining the ‘allowed’ relation (user, TP, {CDI set}):
CR3 The allowed relations must meet the requirements imposed by the principle of separation of duty.
This can be achieved by using authentication of users.
ER3 The system must authenticate each user attempting to execute a TP
Type of authentication is undefined in the model, and depends on the instantiation
Authentication is not required before using the system, but is required before manipulating CDIs (which requires using TPs)

Logging
In most transaction-based systems, each transaction is logged and reviewed by an auditor.
In the Clark-Wilson model, the log is also a CDI, since TPs append to the log and no TP can overwrite it.
CR4 All TPs must append enough information to reconstruct the operation to an append-only CDI.
No TP can overwrite (delete) the log.
Auditor needs to be able to determine what happened during reviews of transactions
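A minimal sketch of an append-only log treated as a CDI, in the spirit of CR4; the AppendOnlyLog class and record format are illustrative assumptions.

# Append-only log as a CDI: TPs can add records, but nothing exposed here
# can overwrite or delete them.

class AppendOnlyLog:
    def __init__(self):
        self._records = []

    def append(self, tp_name, detail):
        # Record enough information to reconstruct the operation later.
        self._records.append(f"{tp_name}: {detail}")

    def review(self):
        # Auditors get a read-only snapshot; no method removes entries.
        return tuple(self._records)

log = AppendOnlyLog()
log.append("post_deposit", "account=123 amount=200")
print(log.review())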

Handling Untrusted Input
A TP that deals with a UDI must first run an IVP:
CR5 Any TP that takes a UDI as input may perform only valid transformations, or no transformations, for all possible values of the UDI. The transformation either rejects the UDI or transforms it into a CDI.
E.g., at an ATM, information about deposited money is a UDI and becomes a CDI after the money is counted.
In a bank, numbers entered at the keyboard are UDIs, so they cannot be input to TPs directly. TPs must validate the numbers (to make them a CDI) before using them;
if validation fails, the TP rejects the UDI
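Below is a minimal sketch of the CR5 pattern: the input is either validated into a CDI or rejected. The udi_to_cdi function and its validation rules are hypothetical.

# CR5 sketch: the raw keyboard string is the UDI; the validated amount is the CDI.

def udi_to_cdi(raw_amount):
    # Return a validated deposit amount (the CDI) or raise ValueError (reject the UDI).
    amount = int(raw_amount)          # rejects non-numeric input
    if amount <= 0:
        raise ValueError("deposit must be positive")
    return amount

try:
    cdi = udi_to_cdi("250")           # accepted: becomes a CDI
    bad = udi_to_cdi("-40")           # rejected: never reaches a TP as a CDI
except ValueError as err:
    print("UDI rejected:", err)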

Separation of Duty in the Model
More on the allowed relation (user, TP, {CDI set}): who can define the users? Who can define the CDI set?
We need one more rule to enforce the integrity of ER2 and ER3.
A user cannot be the owner of a TP (ER2) and cannot be on the access list of the CDIs (ER3).

ER4 Only the certifier of a TP may change the list of entities associated with that TP.
However, no certifier of a TP, or of an entity associated with that TP, may ever have execute permission with respect to that entity (TP).
ER4 enforces separation of duty with respect to certified and allowed relations.

About Clark-Wilson model
Commercial firms do not use a multilevel integrity scheme to classify data; instead they rely on separation of duty.
The notion of certification is different from the notion of enforcement.
E.g., rule ER4 goes beyond certification.
However, certification rules require outside intervention, and certifiers make assumptions about what can be trusted.
This can be a weakness of the system.

Comparison With Requirements
1. (Users will not write their own programs, but will use existing production programs and databases.) Users can’t certify TPs, so CR5 and ER4 enforce this.
2. (Programmers will develop and test programs on a non-production system.) This is procedural, so the model doesn’t directly cover it; but the special process corresponds to using a TP.
No technical controls can prevent a programmer from developing a program on the production system; the usual control is to delete software development tools (compilers, interpreters).
A procedural equivalent of a TP must be used to supply sanitized production data to the test system.

Comparison With Requirements
3. (A special process must be followed to install a program from the development system onto the production system.)
A TP does the installation
and trusted personnel certify that TP

Comparison With Requirements
4. (The special process in requirement 3 must be controlled and audited.) CR4 provides logging; ER3 authenticates trusted personnel doing installation; CR5, ER4 control installation procedure
A new program is a UDI before certification, and becomes a CDI (and a TP) after certification.
5. (The managers and auditors must have access to both the system state and the system logs that are generated.) The log is a CDI, so an appropriate TP can give managers and auditors access.
Access to state handled similarly

Comparison of CW to Biba
In Biba:

a. No notion of certification rules; trusted subjects ensure that actions (their function calls) obey the rules.
b. No mechanism verifies the trusted entities or their actions.
c. A trusted entity (security officer) examines untrusted data before it is made trusted.
In Clark-Wilson, instead of a. and b.: explicit requirements (certification rules) that actions must meet.
Instead of c.: a trusted entity must certify the method used to upgrade untrusted data (and not certify the data itself).

UNIX Implementation
Consider “allowed” relation (user, TP, { CDI set })
First question: How to enforce that only allowed real users run the TP?
Answer: Each TP executable is owned by a different phantom user.
These “phantom users” are actually locked accounts, so no real users can log into them; but this gives each TP a unique UID for controlling access rights.
The TP executable is setuid to its owner, i.e. to that phantom user.
Each TP has a group owner, which contains the set of real users authorized to execute that TP.
The TP is executable by the group and not executable by the world.
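As a rough illustration, the Python sketch below checks that a TP executable follows this arrangement (setuid to its owner, group-executable, not world-executable); the path and helper name are placeholders, not part of any real deployment.

# Sketch: verify the permission layout described above for a TP executable.

import os
import stat

def tp_layout_ok(path):
    st = os.stat(path)
    mode = st.st_mode
    setuid = bool(mode & stat.S_ISUID)        # runs as the phantom owner
    group_exec = bool(mode & stat.S_IXGRP)    # authorized real users are in the group
    world_exec = bool(mode & stat.S_IXOTH)    # everyone else is excluded
    return setuid and group_exec and not world_exec

# Example (hypothetical path): print(tp_layout_ok("/production/bin/post_deposit"))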

UNIX deployment – CDI Arrangement
Second question: How to restrict the TP to access only the allowed CDIs in (user, TP, { CDI set })?
Answer: CDIs must be owned by root or some other unique phantom user.
Again, no logins to that phantom user’s account are allowed.
The CDI’s group owner contains the (phantom) owners of the TPs allowed to manipulate that CDI.
Now, when a real user executes a TP, the process runs setuid as the phantom owner of the TP, and that owner is in the access group of the particular CDI.
Therefore, the TP manipulates the CDIs on behalf of the single user running it.

Examples – special cases
We look into triple: (user, TP, { CDI set })
Special case a: access to the CDI is constrained only by the user:
In the “allowed” triple, the TP can be any TP; it has no setuid feature, so it runs with the real user ID.
The group owner of the CDIs is the set of all users authorized to modify the CDI.
Special case b: access to the CDI is constrained only by the TP:
In the “allowed” triple, the user can be any user.
The (phantom) user u owning the TP is a member of the group owning the CDI, and therefore the TP can access the CDI.
Make the TP owned by u and setuid to u, but world executable.

Problems
Consider “allowed” relation (user, TP, { CDI set }) where access is constrained by user and CDI set
Two different users (not belonging to the same group) cannot use the same copy of a TP to access two different CDI sets.
Need two separate copies of the TP (one for each user and CDI set)
TPs are setuid programs
As these change privileges, we want to minimize their number.
root can assume the identity of the phantom users owning TPs, and so cannot be separated from the certifiers.
There is no way to overcome this without changing the nature of root

Trust Models
Integrity models state conditions under which changes of entities preserve a set of integrity properties.
So they deal with the preservation of trustworthiness
Trust models deal with confidence one can have in the initial values or settings
So they deal with the initial evaluation of whether data can be trusted

Definition of Trust
A trusts B if A believes, with a level of subjective probability, that B will perform a particular action, both before the action can be monitored and in a context in which it affects A’s own action.
Includes subjective nature of trust
Captures idea that trust comes from a belief in what we do not monitor
Leads to transitivity of trust

Transitivity of Trust
Transitivity of trust: if A trusts B and B trusts C, then A trusts C
In practice this depends on A’s assessment of B’s judgment
Conditional transitivity of trust: A trusts C when
B recommends C to A;
A trusts B’s recommendations;
A can make judgments about B’s recommendations; and
Based on B’s recommendation, A may trust C less than B does
Direct trust: A trusts C because of A’s observations and interactions
Indirect trust: A trusts C because A accepts B’s recommendation

Types of Beliefs Underlying Trust
Trust is a cognitive property, so only agents with goals and beliefs can trust other agents.
The trusting agent A has to estimate the risk and decide whether to trust agent B based on some of the following belief types:
Competence: A believes B competent to aid A in reaching goal
Disposition: A believes B will actually do what A needs to reach goal
Dependence: A believes that she (a) needs what B will do, (b) depends on what B will do, or (c) is better off relying on B than not
Fulfillment: A believes goal will be reached
Willingness: A believes B has decided to do what A wants
Persistence: A believes B will not change B’s mind before doing what A wants
Self-confidence: A believes that B knows B can take the action A wants

Evaluating Arguments about Trust (cont’d)
Humanistic traits of trust that need to be translated to the computing world:
Direct experience: A has personal experience dealing with B
Indirect experience: A has observed the evidence of B’s behavior.
Can come from B’s “expert opinion”, “authority” or “reputation”
Observations of B’s “moral nature” and “social standing”
Majority behavior: A’s belief that most people from B’s community are trustworthy
Prudence: Not trusting B poses an unacceptable risk to A
Pragmatism: A’s current interests are best served by trusting B

Trust Management
Use a language to express relationships about trust, allowing us to reason about trust
Evaluation mechanisms take data and trust relationships and produce a measure of trust in an entity, or a decision about whether an action should or should not be taken
Two basic forms for trust evaluation
Policy-based trust management
Reputation-based trust management

Key Points
Integrity policies deal with trust
As trust is hard to quantify, these policies are hard to evaluate completely
Look for assumptions and trusted users to find possible weak points in their implementation
Biba is based on multilevel integrity
Clark-Wilson focuses on separation of duty and transactions
