Merge branch 'master' of gitlab.com:shockrah/csnotes

shockrah 2019-09-29 13:47:05 -07:00
commit bc44a1781f
112 changed files with 275 additions and 34 deletions


@ -0,0 +1,38 @@
# Asymmetric Key Encryption (Public/Private)
Put simply, think of a box that we put things inside of:
* Private key: can open the box
* Public key: can lock the box
The idea works on the principle that public keys are public, so anyone can lock a message down, but only the owner of the matching private key can open those locked messages.
## Public Keys
Can also be used to open something if it was locked with the matching private key.
## Private Keys
If used to lock something, the matching public key can then be used to open the box.
_The catch_: that message is effectively signed, so we know exactly who it is coming from.
## Both together
> Signed Message = Lock(Message, sender's private key)
_Sign_ the message
> Package = Lock(Signed Message, recipient's public key)
Lock the signed message like normal
Once the intended person has the package they:
* Open it with their private key
* Check the signature
* Find the sender's public key for that signature
* Open the remaining layer with the sender's public key
That last part only works because locking with a private key allows the public key to open the box afterwards.
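Here's a minimal sketch of that sign-then-lock flow, following the box metaphor literally with the toy textbook-RSA math from `rsa.md`. All of the numbers and names are made up and tiny for illustration; real implementations sign a hash and use a proper crypto library.
```python
# Toy sign-then-lock walkthrough using textbook RSA numbers (NOT real crypto).

def toy_keypair(p, q, e):
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)           # modular inverse of e, needs Python 3.8+
    return (e, n), (d, n)         # (public key, private key)

sender_pub, sender_priv = toy_keypair(3, 11, e=3)   # n = 33
recip_pub, recip_priv = toy_keypair(5, 17, e=5)     # n = 85 (> 33 so the signature still fits)

m = 9                                               # the message, as a small number

# Sign: "lock" with the sender's private key
sig = pow(m, sender_priv[0], sender_priv[1])        # 15

# Lock: encrypt the signed value with the recipient's public key
c = pow(sig, recip_pub[0], recip_pub[1])            # 70

# Recipient opens the outer layer with their private key...
opened = pow(c, recip_priv[0], recip_priv[1])       # back to 15

# ...then opens the remaining layer with the sender's public key
recovered = pow(opened, sender_pub[0], sender_pub[1])  # back to 9

assert recovered == m
```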

View File

@ -1,16 +0,0 @@
# Asymmetric Key Encryption(Public/Private)
Think of a box that we put things inside of:(put simply)
* Private key: can open the box
* Public key: can lock the box
Caveats:
Public keys contain a unique signature, which can be used to _sign_ a message. Even though everyone can open the message they also know who locked the box.
Imagine then, lock the box with private key(secure) and sign it with the public key(authorized).

312/notes/rsa.md Normal file

@ -0,0 +1,40 @@
# Procedure
Example using the following values:
* p = 3
* q = 17
* e = 15
* m = 3
There are a few components which must be calculated before we can safely produce a ciphertext:
`n = p * q` : note that `p` and `q` must be primes.
`φ(n) = (p - 1) * (q - 1)` is used later to find a value `d` which is the modular inverse of `e` (so `(e * d) % φ(n) == 1`). _We call this the totient function_.
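Plugging in the example values from above (the value `d` isn't listed in the example, but it follows from the others):
```
n    = 3 * 17           = 51
φ(n) = (3 - 1) * (17 - 1) = 32
d    = 15               since e * d = 15 * 15 = 225 = 7*32 + 1 ≡ 1 (mod 32)
```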
## Encryption
To produce a cipher text `C` we take `m` and raise it to the power of `e` (from earlier), then take the result modulo `n`:
```
C = (m^e) % n
```
`m` is the desired message to encrypt.
The public and private keys are just the parameter pairs used in these functions:
`PublicKey(e, n)`
`PrivateKey(d, n)`
## Decryption
The reverse of this, using the private exponent `d`, is the following:
```
M = (C^d) % n
```
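As a quick sanity check, here is the whole procedure run in Python with the example values from the top of these notes (textbook RSA only; real RSA uses huge primes and padding):
```python
# Textbook RSA with the tiny example values from these notes (toy only).
p, q, e, m = 3, 17, 15, 3

n = p * q                  # 51
phi = (p - 1) * (q - 1)    # 32
d = pow(e, -1, phi)        # 15, the modular inverse of e (Python 3.8+)

C = pow(m, e, n)           # encrypt with the public key (e, n)  -> 6
M = pow(C, d, n)           # decrypt with the private key (d, n) -> 3

assert M == m
print(f"n={n} phi={phi} d={d} C={C} M={M}")
```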


337/lec/lec1.md Normal file

@ -0,0 +1,155 @@
# lec1
> What on earth?
The first lecture has been 50% syllabus, 25% videos, and 25% simple terminology; expect nothing interesting for this section
## General Performance Improvements in software
In general we have a few options to increase performance in software: pipelining, parallelism, and prediction.
1. Parallelism
If we have multiple tasks to accomplish or multiple sources of data, we might find it better to work on multiple things at once [e.g. multi-threading, multi-core rendering]; see the sketch after this list.
2. Pipelining
Here we are taking _data_ and serializing it into a linear series of steps.
We do things like this because it can make sense to do things linearly [e.g. taking data from a website response and forming it into a struct/class instance in C++/Java et al.].
3. Prediction
If we can predict an outcome and avoid a bunch of computation, then it can be worth taking our prediction and proceeding with that instead of doing the full computation.
This happens **a lot** in CPUs, which use what's called [branch prediction](https://danluu.com/branch-prediction/) to run even faster.
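Below is a minimal sketch of the parallelism idea using Python's standard `concurrent.futures`; the task names and timings are made up purely for illustration.
```python
# Run independent tasks at the same time instead of one after another.
from concurrent.futures import ThreadPoolExecutor
import time

def fetch_source(name):
    # Stand-in for some independent piece of work (a download, a render tile, ...)
    time.sleep(1)
    return f"data from {name}"

sources = ["source-a", "source-b", "source-c", "source-d"]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    results = list(pool.map(fetch_source, sources))
print(results)
print(f"took ~{time.perf_counter() - start:.1f}s instead of ~{len(sources)}s serially")
```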
## Cost of Such Improvements
As the saying goes, every decision you make as an engineer ultimately has a cost; let's look at the cost of these improvements.
1. Parallelism
If we have a data set which has some form of inter-dependency between its members, then we can easily run into the issue of waiting on other things to finish.
Contrived Example:
```
Premise: output file contents -> search lines for some text -> sort the resulting lines
We have to do the following processes:
print my-file.data
search file
sort results of the search
In bash we might do: cat my-file.data | grep 'Text to search for' | sort
```
Parallelism doesn't make sense here for one reason: this series of processes doesn't benefit from parallelism because the 2nd and 3rd tasks _must_ wait until the previous ones finish first.
2. Pipelining
Let's say we want to do the following:
```
Search file1 for some text : [search file1]
Feed the results of the search into a sorting program [sort]
Search file2 for some text [search file2]
Feed the results of the search into a reverse sorting program [reverse sort]
The resulting Directed Acyclic Graph looks like
[search file1] => [sort]
[search file2] => [reverse sort]
```
Making the above linear means we effectively have to:
```
[search file1] => [sort] [search file2] => [reverse sort]
| proc2 waiting........|
```
This wastes a lot of time if the previous process is going to take a long time.
Bonus points if process 2 is extremely short.
3. Prediction
Ok two things up front:
* First: prediction's fault is that we could be wrong and end up having to do the hard computations anyway.
* Second: _this course never covers branch prediction (something that pretty much every CPU made in the last 20 years does)_ so I'm gonna cover it here; ready, let's go.
For starters let's say a basic cpu takes instructions sequentially in memory: `A B C D`.
However this is kinda slow because there is _time_ spent fetching an instruction, decoding it to know what instruction it is, and finally executing it.
For this reason modern CPUs actually fetch, decode, and execute (and more!) instructions all at the same time.
Instead of handling instructions strictly one after another, like this:
```
0
AA
BB
CC
DD
```
We actually do something more like this, overlapping the fetch of the next instruction with the work on the current one:
```
A
AB
BC
CD
D0
```
If it doesn't seem like much, remember this saves roughly half the time per instruction on a chip that is likely going to process thousands/millions of instructions, so the savings scale really well.
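Here's a quick back-of-the-envelope sketch of those savings, assuming an idealized two-stage pipeline where every stage takes one cycle (numbers purely illustrative):
```python
# Idealized cycle counts: sequential vs pipelined execution.
def sequential_cycles(n_instructions, n_stages):
    # Each instruction goes through every stage before the next one starts.
    return n_instructions * n_stages

def pipelined_cycles(n_instructions, n_stages):
    # Fill the pipe once, then one instruction finishes per cycle.
    return n_stages + (n_instructions - 1)

for n in (4, 1_000, 1_000_000):
    seq = sequential_cycles(n, n_stages=2)   # fetch + execute, like the diagrams above
    pipe = pipelined_cycles(n, n_stages=2)
    print(f"{n:>9} instructions: {seq} cycles sequential vs {pipe} pipelined")
```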
This scheme is fine if our instructions are all coming one after the other in memory, but if we need to branch then we likely need to jump to a new location like so.
```
ABCDEFGHIJKL
^^^* ^
|-----|
```
Now say we have the following code:
```
if (x == 123) {
main_call();
}
else {
alternate_call();
}
```
The (pseudo)assembly might look like:
```asm
cmp x, 123
je second
main_branch: ; pointless label but nice for reading
call main_call
jmp end
second:
call alternate_call
end:
; something to do here
```
Our problem comes when we hit the `je`.
Once we've loaded that instruction and can start executing it, we have to make a decision: load the `call main_call` instruction or the `call alternate_call` one?
Chances are that if we guess we have a 50% chance of saving time and a 50% chance of tossing out our guess and starting the whole _get instruction => decode etc._ process over again from scratch.
Solution 1:
Try to determine which branches are taken prior to running the program and just always guess the more likely branch.
If we find that the above code calls `main_branch` more often, then we should always load that branch, knowing that the loss from being wrong is offset by the gain from the statistically more frequent correct guesses.
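Here's a rough sketch of that "always guess the statistically likely branch" idea; this is a toy model, not how real silicon works, and the penalty and probabilities are made up.
```python
# Toy model of static branch prediction: always guess the branch that
# profiling said was more common, and pay a flush penalty when wrong.
import random

FLUSH_PENALTY = 10   # cycles lost re-fetching after a wrong guess (made up)
HIT_COST = 1         # cycles when the guess was right (made up)

def run(branch_outcomes, prediction):
    cycles = 0
    for taken in branch_outcomes:
        # A wrong guess means throwing the speculative work away and starting
        # the fetch/decode process over, which is where the penalty comes from.
        cycles += HIT_COST if taken == prediction else FLUSH_PENALTY
    return cycles

random.seed(0)
# Say profiling showed main_branch is taken ~90% of the time.
outcomes = [random.random() < 0.9 for _ in range(100_000)]

print("always predict taken:    ", run(outcomes, prediction=True), "cycles")
print("always predict not taken:", run(outcomes, prediction=False), "cycles")
```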
...


@ -18,12 +18,12 @@ _Try to do this on your own first!_
![fig1](../mg/fig1llec11.png)
Next we'll add on the `xor`.
- AGAIN: try to do this on your own, the main hint I'll give here is: the current mux needs to be changed.
+ Try doing this on your own but as far as hints go: don't be afraid to make changes to the mux.
![fig2](../img/fig2lec11.png)
Finally we'll add the ability to add and subtract.
- You may have also noted that we can subtract two things to see if they are the same dhowever, we can also `not` the result of the `xor` and get the same result.
+ You may have also noted that we can subtract two things to see if they are the same however, we can also `not` the result of the `xor` and get the same result.
![fig3](../img/fig3lec11.png)

Some files were not shown because too many files have changed in this diff.