By Jessica Wang, Andrew Lumbang
Mainframes were developed in the 1960s. Notable companies that made and sold mainframes were IBM and Fujitsu. Mainframes are data-intensive: a central problem with mainframes was getting data to the right spot. On the positive side, mainframes optimize data handling and are very reliable. Below is a diagram of the general structure of a mainframe.
Mainframes improved as time went by, but they were still pretty expensive.
Clusters
Clusters were developed in the 1990s as a cheaper alternative to mainframes, which had become too expensive. Clusters weren't as reliable as mainframes, but the extra reliability of a mainframe wasn't worth its price.
A cluster is essentially composed of Linux boxes connected to the same IP network, over which they can all communicate; the network's speed is on the order of gigabits per second. Clusters became very popular, and nowadays most computing is done with clusters rather than mainframes or clouds. Some notable cluster systems are Beowulf and SGE (Sun Grid Engine). One nice property of clusters is that the individual boxes don't have to be identical: the machines in a cluster can be heterogeneous (though they are typically x86-64).
Political Issues        | Technical Issues
Who controls the cloud? | Security
Who pays for the cloud? | Resource management
Continuing from the previous lecture, we now want something that is simpler and easier to manage and understand. Nevertheless, we still want the ability to accurately prohibit bad accesses and allow good ones.
Techniques for doing so:
Traditional Unix
User Group Other
| r w x | r w x | r w x |
Original Unix
Berkeley Software Distribution (BSD)
***only ROOT can create groups***
Access Control Lists (ACLs)
An owner of a resource can specify an access list: a list of principals and their permissions. Typically used in Windows NT, Solaris, Samba, and now even Unix, ACLs add more flexibility but also complexity.
Ex. On a Solaris machine:
$ getfacl .
user: rwx
group: r-x
other: r-x
$ setfacl .
Key Idea:
If you set the default values of ACLs correctly, then when an object or resource is created you will not have to use setfacl very often; the properties will be inherited correctly. In other words, if you set the root defaults accurately, everything created under it inherits the same properties.
Problem:
$ sudo
# cd /bad/guy
# ls
All you wanted was the ability to inspect a file, not to run some program as root.
Role-Based Access Control (RBAC)
Only really used in big popular products (Oracle, Solaris, and Active Directory)
Grants access to roles, not to the people
Ex. If you're in the backup role, you only get backup abilities. The same applies to roles for poweroff, changing grades, etc.
RBAC keeps a table recording, for each user, which roles they can assume. Applications run c…, but have li… However, the downside of this method is that it is too complicated and is not widely used in practice.
The cube is now:
Mechanisms for Enforcing Access Control
Neither approach dominates the other, since both ensure unforgeability and have the OS check all accesses. Over a network, however, the capabilities approach is preferred: to gain access, you send your credentials (which should be encrypted) over the wire, which is exactly what the capabilities method does. Even if you follow the ACL approach, you would end up molding it into the capabilities one.
TRUSTED SOFTWARE
The OS doesn't trust users, and consequently doesn't trust the applications that run on their behalf. However, some programs do need to be trusted. One such program is login.
Running a process as another user is not a security breach in this case, because setuid is a privileged mechanism: the kernel checks whether the caller is allowed to change its UID, i.e., whether the executable has the setuid bit set.
-r-sr-xr-x
s: when this file starts running, it runs as the file's owner (here, root).
Since we can only trust a small amount of software, which programs do we trust?
How can we trust login?
Cryptographic checksum of the program
How does vendor trust login?
Look at login.c and confirm there are no dangerous parts
However, simply reading the source code does not always guarantee a working and safe program. In his paper "Reflections on Trusting Trust", Ken Thompson showed that reading code alone does not ensure integrity, by forcing the C compiler itself to misbehave.
Looking at the compiler's source code, it seems perfect: there is nothing wrong with the C files. Yet the compiler can still break any code it compiles. The only way to detect the bug is to disassemble the object code itself.
The trusted computing base should be as small as possible (according to K. Thompson, it's bigger than you think) and should be kept secure. It contains the kernel and root.