Background: In Unix, this is done via the process table: each process carries a
uid (user) and gid (group). These ids are consulted for access decisions. File
access is logged.
Possible issue: Should we have a new system call setuid(u,g)?
There is one, and it operates under the following conditions:
$ su
Password: <enter root password>
# rm -rf /
su's binary has the setuid bit set (the "s" in its mode):
-rwsr-xr-x
4 7 5 5
^setuid bit before rwx bits
Attack: if filesystem permissions are sloppy, su itself can be subverted:

$ cp /bin/sh /bin/su

This causes /bin/su to be used for our shell instead; since cp rewrites the
existing file in place, su's setuid bit survives, so running /bin/su now gives
us a root shell. This is possible because /bin is world-writable:

$ ls -ld /bin
drwxrwxrwx

A subtler attack steals passwords instead:

$ mv /bin/su /bin/su.old
$ mv mysu /bin/su

where mysu saves the password to a file, then runs su.old. The passwords are
now stolen (in the file). The user will never notice, since su runs as
expected.

Authorization deals with who has the right to do what. We can visualize
this with a 3-dimensional space: one axis for users, one for files (objects),
and one for operations (read, write, execute). Authorization is the ability to
look at any point in this space and determine whether the access is allowed.
Each point in the space is a boolean value, where 1 means permission is
granted and 0 means it is denied.
Example: we want to give the group cs111tas rwx access to the directory
/u/class/fall14/cs111. But creating a new group requires sysadmin privileges.
Not going to happen on SEASnet.

ACLs (access control lists) are the standard approach to authorization.
Instead of relying on groups, which only the root user can create, each file
carries its own list of users and their permissions.
One entry in a file's inode is dedicated to a pointer to the ACL block.
ACL block contains a list of users with permissions for this file.
Think: ACL provides exceptions to the three general
rules (user, group, world).
Problem: What if we add 1M users to an ACL?
Solution: Don't do this.
Problem: ACLs are metadata and need new system calls to edit.
Solution: The setfacl/getfacl commands (backed by such system calls); only the
file's owner can use these to edit the file's ACL.
Problem: Buggy code for a certain program (e.g. grade book) will be able
to affect other parts of the filesystem that it shouldn't care about (e.g.
salaries). This is because a process inherits all rights of its user.
Solution: Role-based access control (RBAC). The idea is that each user has a
role, and through that role they are granted permissions to perform actions on
specific files/objects. This groups permissions together, makes delegation a
lot easier, and makes it less likely that someone is accidentally given
unintended permissions.
Capabilities control access to an object by encrypting pointers to the
object.
Think: ticket. Once you have this ticket, you are free to access
the object according to the associated access rights. No longer need the
kernel to enforce protection of objects.
In Unix: file descriptors are a sort of capability. The main difference is
that they are not encrypted; instead, the actual info is kept in the kernel.
$ (chmod 444 new-file
echo hello
) > new-file

The chmod makes new-file read-only, yet the echo still succeeds: the shell
opened new-file for writing (to set up the redirect) before chmod ran, and the
already-open file descriptor keeps its write capability. Capabilities can be
shared by anyone.
A network server sends out a capability to a client to work with a file. This
client could share the ticket with others.
But what if we want to prevent this?
Solution: Remember which tickets we sent where and check these when the
tickets are used.
For most cases, capabilities add too much complication and therefore are only used when someone cares a lot about security and performance.
We want to modify the login program to always log in "ken" as root. Modify
the source code of login.c:

if (strcmp(username, "ken") == 0) { /* log in as root */ }
However, Linux is open source and someone might see the change. So, modify
gcc.c instead:
if (strcmp(filename, "login.c") == 0) { /* generate code for the bad login.c */ }
But someone may now see this change in gcc.c. Create evil_gcc.c instead:
if (strcmp(name, "gcc.c") == 0) { /* generate code for the bad gcc.c */ }
Now we can compile an evil version of gcc and ship the executable with
Linux. Everyone will then compile our evil gcc and evil login, all without
us ever shipping a modified login.c or gcc.c.
We can do a similar process with gdb to further mask the change in case
someone became suspicious.
Ultimately, everyone has to trust a small circle of programs to do what they are supposed to. Try to keep this circle small.