Compare commits


No commits in common. "218d1c9d6c8c1ec7928dff8979b5fc9be2ee07a7" and "62bcfa79b3ddbbc46977ee96c7b7d3e0608c12d3" have entirely different histories.

15 changed files with 314 additions and 779 deletions


@@ -1,10 +1,10 @@
 image: pandoc/core
 pages:
   script:
     - ./scripts/build-html.sh
   artifacts:
     paths:
-      - public
+      - public/
   only:
     - master


@@ -1,51 +1,81 @@
lec1
=====
# lec10
First we'll define some terminology.
## TCP Structure
> Hosts
Sequence Numbers:
* byte stream _number_ of first byte in segment's data
End systems - typically don't bother with routing data through a network
ACKs:
* seq # of next byte expected from other side
> Communication Links
Example:
```
host a: user sends 'c'
    seq=42, ack=79, data='c'
host b: ACK receipt sent to host a (echoes back 'c')
    seq=79, ack=43, data='c' ; data sent back from host b
```
Typically the actual systems that connect things together.
### Round trip time
Network edges
-------------
EstimatedRTT= (1-\alpha)*EstimatedRTT + \alpha*SampleRTT
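A minimal Python sketch of this exponentially weighted moving average; the α value of 0.125 is the common default from RFC 6298, and the sample list is made up:

```python
def estimate_rtt(samples, alpha=0.125, initial=None):
    """Exponentially weighted moving average of RTT samples (seconds)."""
    est = initial if initial is not None else samples[0]
    for sample in samples:
        est = (1 - alpha) * est + alpha * sample
    return est

# Made-up samples: a steady 0.100s link with one 0.500s outlier.
# The outlier nudges the estimate up only slightly.
print(round(estimate_rtt([0.100, 0.100, 0.500, 0.100, 0.100]), 4))
```

Note how a single spike barely moves the estimate; that smoothing is the whole point of the EWMA.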
Can be subdivided clients & servers and sometimes both at the same time.
> Lots of stuff missing here
Access network: cable network
-----------------------------
## TCP Reliable data transfer
Implements:
* Pipelined segments
* cumulative `ACK`
* This just means that we assume that the highest sequenced ACK also means the previous segments have been received properly too
* Single transmission timer
### Sender Events
1. First create segment w/ seq no.
a. Sequence number refers to the byte-stream number of the first byte in the segment's data
2. Start timer if we don't already have one.
a. Timer based off oldest UN-ACKED segment
## Retransmission w/ TCP
__Timeout__: usually pretty long, so recovering a lost packet only via timeout is slow.
When a gap appears, the receiver responds to the sender with 3 duplicate ACKs for the last well-received segment:
Receiver gets `1 2 3 5` but not `4`. We ACK `1`-`3` like normal, then 3 duplicate ACKs for `3` reach the sender before the timeout and we start re-sending from `4`.
This is what we call _fast retransmit_.
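A toy receiver-side sketch of how duplicate cumulative ACKs signal the gap; the function name and segment numbers here are my own, for illustration only:

```python
def receiver_acks(segments_received):
    """Cumulative-ACK receiver: for each arriving segment number, emit the
    next in-order segment it expects. A gap produces duplicate ACKs."""
    acks = []
    expected = 1
    buffered = set()
    for seg in segments_received:
        buffered.add(seg)
        while expected in buffered:   # advance past everything in order
            expected += 1
        acks.append(expected)         # "send me `expected` next"
    return acks

# Segments 1 2 3 5 6 7 arrive but 4 is lost: the repeated ACKs for 4
# are what trigger fast retransmit at the sender.
print(receiver_acks([1, 2, 3, 5, 6, 7]))  # → [2, 3, 4, 4, 4, 4]
```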
_The main thing here is that the receiver controls the sender's "send rate" so that the receiver doesn't get inundated._
Receiver will _advertise_ free buffer space by including an `rwnd` value in the TCP header.
This just tells the sender how much space is available to accept at a time.
Example: Transferring a large file from host to host.
Host A will send a file to host B.
A sends some file data to B, who then ACKs the packet but notes in the header that its buffer is full.
A responds with a 1-byte packet to keep the connection alive.
## Connection Management
Before sender/receiver start exchanging anything we must perform a `handshake`.
`SYN` is a special packet type under TCP which we can use to synchronize both client and server.
### Closing
`FIN` bit inside the header.
We send this off to the receiver and enter a `fin_wait` state.
We only wait because there might be more data in flight.
The receiver enters a `close_wait` state, _but_ still sends any data left over.
Once its remaining data is ACKed, the receiver sends its own `FIN` packet.
Typically when we have to share one line, we can change the frequency of the
signal as one method to distinguish between data which may come from
different sources.
### Home Network
Let's start with the modem. All it does is take some signal and convert
it to the proper IEEE data format (citation needed).
Typically we would then pipe that data to a router which, given a
scenario for most houses, would forward that input data to whichever
machines requested the data.
If you recall your discrete mathematics coursework, various graph
topologies were covered, and you likely noted that *star* topologies were
common for businesses since they make it easiest to send data from one
outside node on the star to another. In practice this just means
having the router/modem setup be one of the appendages of the star and a
switch be in the middle, so that data only has to make two hops to
get anywhere in the network.
> Doesn't that mean there's one node that could bring the whole network
> down at any time?
Absolutely. If you have a *very* small network with a couple of devices
it's not really a problem, but if you have an office full of employees
all with their own machines, plus wireless, printers, servers, etc., then
it's a huge problem. Still, a small business or shop might be more
inclined to use such a setup because:
* It's easy to set up
* It's cheap to maintain


@@ -1,32 +1,77 @@
Active v Passive Attacks
========================
# Block Ciphers
Base Definitions
----------------
The main concept here is twofold:
Passive: compromising a system but not necessarily doing anything apart
from *watching*
* we take _blocks_ of data and cipher the _blocks_
* A given key is actually used to generate recursive keys to be further used on the data itself
Active: compromising a system while doing something to the system apart
from infiltrating it
Loosely speaking
----------------
_bs example ahead_
*Passive* can be just like listening in on a conversation (eavesdropping),
where *active* is like jumping into the conversation and trying to do
something to it.
Say we have a key 7 and some data 123456.
We take the whole data set and chunk it into blocks (for example): 12 34 56.
When/How would either happen?
-----------------------------
Let's say our function here is to just add 7 to each block so we do the first step:
If the result of an attack is to actually trigger some code to run then
usually we need to first gather the information required to understand
how to make that happen. The reasoning is straightforward: if you don't
know how some system works then it's much harder to exploit that system.
```
12 + 7 = 19
Unlike a simple repeating-key cipher we don't reuse 7; the running result becomes both the next key and part of our cipher text
Random example: Using a keylogger to log keystrokes before sending those
logs to a server for processing could be the passive attack, since you're
still in a *gathering data* sort of mode. Then using that data to
try logging into some service would be the active portion of a
full-scale attack.
19 + 34 = 53
Cipher: 1953..
53 + 56 = 109 <= let's pretend that this rolls over 99 and back to 00
09 <= like this
Final cipher: 195309
```
_It should be noted that in practice these functions usually take in huge keys and blocks_.
> Deciphering
Start from the back of the cipher, not the front; if we used an xor-based scheme (xor is a symmetric function) we would simply xor each block with the previous one, performing the same encryption scheme but in reverse.
Example::Encryption
```
Key: 110
Function scheme: xor
Data: 101 001 111
101 011 010
110 001 111
011 010 101 <= encrypted
```
Example::Decryption
```
Ciphered: 011 010 101
Function scheme: xor
101 xor 010 = 111 <= last block, working from the back
010 xor 011 = 001
011 xor 110 = 101 <= first block xor'd with the key
Recovered: 101 001 111
```
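The chained-xor scheme above can be sketched in Python on the same 3-bit blocks (the function names are illustrative, not from the lecture):

```python
def encrypt(blocks, key):
    """Chained xor: each plaintext block is xored with the previous
    ciphertext block; the key seeds the chain."""
    prev, out = key, []
    for b in blocks:
        prev = b ^ prev
        out.append(prev)
    return out

def decrypt(blocks, key):
    """Each plaintext is ciphertext xor the previous ciphertext block."""
    prev, out = key, []
    for b in blocks:
        out.append(b ^ prev)
        prev = b
    return out

key = 0b110
data = [0b101, 0b001, 0b111]
ct = encrypt(data, key)            # [0b011, 0b010, 0b101] as in the example
assert decrypt(ct, key) == data    # round-trips
```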
# Feistel Cipher
Two main components:
1. each _thing_ in the data to cipher is replaced by a _ciphered thing_ (substitution)
2. nothing is added or deleted; instead the order of _things_ is changed (permutation)
Basically imagine that every _type of thing_ in our data maps to some other _thing_ in the data and thus gets swapped/reordered.
# DES - Data Encryption Standard
Widely used until about 2001, when AES surpassed it as the newer(ish(kinda)) standard.
DEA was the actual algorithm though:
* 64 bit blocks
* 56 bit keys
* turns a 64-bit input into a 64-bit output (wew)
* Steps in reverse also reverse the encryption itself


@@ -1,169 +1,41 @@
lec1
====
# lec11
> What on earth?
At this point I'll mention that just reading isn't going to get you anywhere; you have to try things and give it a real, earnest attempt.
The first lecture has been 50% syllabus, 25% videos, 25% simple
terminology; expect nothing interesting for this section
__ALU:__ Arithmetic Logic Unit
General Performance Improvements in software
--------------------------------------------
## Building a 1-bit ALU
In general we have a few options to increase performance in software:
pipelining, parallelism, prediction.
![fig0](../img/alu.png)
1. Parallelism
First we'll create an example _ALU_ which implements choosing between an `and`, `or`, `xor`, or `add`.
Whether or not our amazing _ALU_ is useful doesn't matter, so we'll go one function at a time (besides `and`/`or`).
If we have multiple tasks to accomplish or multiple sources of data we
might instead find it better to work on multiple things at
once\[e.g. multi-threading, multi-core rendering\]
First recognize that we need to choose between `and` or `or` against our two inputs A/B.
This means we have the two results (`and`, `or`) and we need to select between them.
_Try to do this on your own first!_
2. Pipelining
![fig1](../img/fig1lec11.png)
Here we are somehow taking *data* and serializing it into a linear form.
We do things like this because it can make sense to process things
linearly \[e.g. taking data from a website response and forming it into a
struct/class instance in C++/Java et al.\].
Next we'll add on the `xor`.
Try doing this on your own but as far as hints go: don't be afraid to make changes to the mux.
3. Prediction
![fig2](../img/fig2lec11.png)
If we can predict an outcome to avoid a bunch of computation then it
could be worth taking our prediction and proceeding with that instead of
computing the real result. This happens **a lot** in CPUs, where they use what's called
[branch prediction](https://danluu.com/branch-prediction/) to run even
faster.
Finally we'll add the ability to add and subtract.
You may have also noted that we can subtract two things to see if they are the same; however, we can also `not` the result of the `xor` and get the same result.
Cost of Such Improvements
-------------------------
![fig3](../img/fig3lec11.png)
As the saying goes, every decision you make as an engineer ultimately
has a cost; let's look at the cost of these improvements.
At this point our _ALU_ can `and`, `or`, `xor`, and `add`/`sub`.
The mux will choose which logic block to use; the carry-in line tells the `add` logic block whether to add or subtract.
Finally, the A-invert and B-invert lines let us invert either input A or B.
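A rough software model of the ALU described here; the op encoding and function names are my own assumptions, not the lecture's exact design:

```python
def alu_1bit(a, b, op, carry_in=0, a_invert=0, b_invert=0):
    """One 1-bit ALU slice: apply the invert lines, then a 'mux' picks
    and/or/xor/add. Returns (result, carry_out)."""
    if a_invert: a ^= 1
    if b_invert: b ^= 1
    if op == "and": return a & b, 0
    if op == "or":  return a | b, 0
    if op == "xor": return a ^ b, 0
    if op == "add":  # full adder
        s = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return s, carry_out
    raise ValueError(op)

def alu_nbit(a_bits, b_bits, op, sub=False):
    """Ripple N-bit ALU: chain each carry-out into the next slice's
    carry-in. Subtraction = invert B and seed the carry-in with 1."""
    carry = 1 if sub else 0
    out = []
    for a, b in zip(reversed(a_bits), reversed(b_bits)):  # LSB first
        r, carry = alu_1bit(a, b, op, carry, b_invert=1 if sub else 0)
        out.append(r)
    return list(reversed(out))

# 0b0101 (5) - 0b0011 (3) = 0b0010 (2)
print(alu_nbit([0, 1, 0, 1], [0, 0, 1, 1], "add", sub=True))
```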
1. Parallelism
## N-bit ALU
If we have a data set which has some form of inter-dependencies between
its members then we could easily run into the issue of waiting on other
things to finish.
For sanity we'll use the following block for our new ALU.
Contrived Example:
![fig4](../img/fig4lec11.png)
Premise: output file contents -> search lines for some text -> sort the resulting lines
We have to do the following processes:
1. print my-file.data
2. search the file
3. sort the results of the search
In bash we might do: `cat my-file.data | grep 'Text to search for' | sort`
Parallelism doesn't make sense here for one reason: this series of
processes doesn't benefit from parallelism because the 2nd and 3rd tasks
*must* wait for the previous ones to finish first.
2. Pipelining
Let's say we want to do the following:
```
search file1 for some text                             [search file1]
feed the results of the search into a sorting program  [sort]
search file2 for some text                             [search file2]
feed the results into a reverse sorting program        [reverse sort]
```
The resulting Directed Acyclic Graph looks like:
```
[search file1] => [sort]
[search file2] => [reverse sort]
```
Making the above linear means we effectively have to:
```
[search file1] => [sort] [search file2] => [reverse sort]
| proc2 waiting........|
```
Which wastes a lot of time if the previous process is going to take a
long time. Bonus points if process 2 is extremely short.
3. Prediction
Ok two things up front:
- First: prediction's fault is that we could be wrong and have to end
up doing hard computations.
- Second: *this course never covers branch prediction(something that
pretty much every cpu in the last 20 years out there does)* so I'm
gonna cover it here; ready, let's go.
For starters let's say a basic CPU takes instructions sequentially in
memory: `A B C D`. However, this is kinda slow because there is *time*
between fetching an instruction, decoding it to know what instruction it is,
and finally executing it proper. For this reason modern CPUs actually
fetch, decode, and execute (and more!) instructions all at the same time.
Instead of getting instructions like this:
```
0
AA
BB
CC
DD
```
We actually do something more like this:
```
A
AB
BC
CD
D0
```
If it doesn't seem like much remember this is half an instruction on a
chip that is likely going to process thousands/millions of instructions
so the savings scales really well.
This scheme is fine if our instructions are all coming one after the
other in memory, but if we need to branch then we likely need to jump to
a new location like so.
```
ABCDEFGHIJKL
^^^* ^
|-----|
```
Now say we have the following code:
```
if (x == 123) {
    main_call();
}
else {
    alternate_call();
}
```
The (pseudo)assembly might look like
``` {.asm}
cmp x, 123
je second
main_branch: ; pointless label but nice for reading
call main_call
jmp end
second:
call alternate_call
end:
; something to do here
```
Our problem comes when we hit the `je`. Once we've loaded that instruction
and can start executing it, we have to make a decision: load the
`call main_call` instruction or the `call alternate_call`? Chances are
that if we guess we have a 50% chance of saving time and a 50% chance of
tossing out our guess and starting the whole *get instruction => decode
etc.* process over again from scratch.
Solution 1:
Try to determine which branches are taken prior to running the program
and just always guess the more likely branches. If we find that the
above branch calls `main_branch` more often then we should always load that
branch, knowing that the loss from being wrong is offset by the
gain from the statistically-more-often-correct guesses.
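Solution 1 (profile the program, then statically predict each branch's most common outcome) can be sketched like this; the branch name and trace are entirely made up:

```python
from collections import Counter

def profile(trace):
    """Profiling run: count each branch's outcomes (True = taken)."""
    counts = {}
    for branch, taken in trace:
        counts.setdefault(branch, Counter())[taken] += 1
    # Static rule: always predict each branch's most common outcome.
    return {b: c.most_common(1)[0][0] for b, c in counts.items()}

def accuracy(trace, prediction):
    hits = sum(prediction[b] == taken for b, taken in trace)
    return hits / len(trace)

# Made-up trace: branch "je_second" is taken 1 time in 5.
trace = [("je_second", t) for t in [False, False, True, False, False] * 20]
pred = profile(trace)
print(pred, accuracy(trace, pred))  # always predicting not-taken is 80% right
```

Real CPUs improve on this with dynamic predictors (e.g. saturating counters) that update the guess at runtime, but the offset logic is the same: cheap guesses that are usually right beat always paying the full fetch/decode restart.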
...
Note that we are chaining the carry-ins to the carry-outs just like a ripple adder.
Also, each ALU slice works with just `1` bit from our given 4-bit input.


@@ -10,76 +10,29 @@ Most typically we deal with binary(when we do) in nibbles or 4 _bit_ chunks whic
Ex:`0101 1100` is a basic random byte.
For most sane solutions this is essentially the only way we __ever__ deal with binary.
> Why can't we (((save bits))) and not use nibbles?
In truth you can totally do that; but not really.
To explain let's look at some higher level C/C++ code; say you had this structure:
```
struct Point {
int x; // specifying width for clarity sake
int y;
unsigned int valid : 1;
};
```
On a typical x86 system (and many x64 systems) with no compile-time optimizations this structure might look like:
```
32(int x) + 32(int y) + 1(unsigned int valid) + 7(bits of padding)
```
Why? Because while we can always calculate the address of a particular byte in memory, we can't (or rather don't even try to) do the same for bits.
The reason is simple: a 32-bit CPU can calculate any number inclusively between `0` and `0xffffffff` (`4294967295`). That gives us an address space large enough to name every byte, but not large enough to name every bit as well.
If we use that `valid` _bit-field_ in our code later like
```
if(point_ref->valid) {
/* do stuff */
}
```
The machine code instructions generated will really just check if that byte(which contains the bit we care about) is a non-zero value.
If the bit is set we have (for example) `0b0000 0001` thus a _true_ value.
## Two's Complement - aka Negate
To find the Negation of any bit-string:
i.e. `3 * -1=> -3`
1. Flip all bits in the bit-string
2. Add 1 to the bitstring
The case for 3:
```
start off: 0011 => 3
flip bits: 1100    (one's complement)
add one:   1101 => -3
```
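The flip-then-add-one recipe as a quick Python sketch (a 4-bit width is assumed, matching the example above):

```python
def negate(value, bits=4):
    """Two's-complement negate: flip all bits, then add 1 (mod 2**bits)."""
    mask = (1 << bits) - 1
    flipped = value ^ mask        # one's complement
    return (flipped + 1) & mask

def to_signed(value, bits=4):
    """Interpret a raw bit pattern as a signed two's-complement number."""
    return value - (1 << bits) if value & (1 << (bits - 1)) else value

assert negate(0b0011) == 0b1101   # 3 -> -3
assert to_signed(0b1101) == -3
```

Note that `negate` is its own inverse: applying it twice gets the original bit pattern back.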
### Signedness
> Why?
Because this matters for dealing with `signed` and `unsigned` values. _No it doesn't mean positive and negative numbers._
Say we have 4 bits to mess with. This means we have a range of 0000 to 1111. If we wanted purely positive numbers in this range we could have 0000 to 1111... or 0 to 15.
If we need negative representation, however, we have to sacrifice some of our range.
Our new non-negative range is then `0-7`, _or in binary_: `0000 - 0111`. The largest number we can represent without setting the first (sign) bit is `0111` => `7`.
Our negative range is then `-8 -> -1`, which in binary is `0b1000 -> 0b1111`
## Intro to hex
> Hex Notation 0x...
x86 assemblers (masm) will typically accept `...h` as a postfix notation.
More convenient than binary for obvious reasons; namely it doesn't look like spaghetti on the screen.
@@ -88,29 +41,24 @@ More pedantically our new hex range is 0x00 to 0xff.
> Binary mapped
It happens that 1 nibble makes up 0x0 to 0xF.
So for now just get used to converting {0000-1111} to its respective value in hex, and eventually it should be second nature.
Then just move on to using hex (like, immediately after these lessons), because writing raw binary is awful.
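A quick drill for the nibble-to-hex mapping (the helper name is made up):

```python
# Each 4-bit nibble maps to exactly one hex digit, so a byte is two digits.
for n in range(16):
    print(f"{n:04b} -> {n:X}")

def byte_to_hex(byte_str):
    """'0101 1100' -> '0x5C', converting one nibble at a time."""
    hi, lo = byte_str.split()
    return "0x" + format(int(hi, 2), "X") + format(int(lo, 2), "X")

print(byte_to_hex("0101 1100"))  # 0x5C
```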
> Dude trust me hex is way better to read than decimal
It may seem inconvenient at first, but after a while you'll realize that hex is really easy to read and makes things clear and concise, especially when dealing with bit masks and bitsets.
Even the most obfuscated binary files out there don't resort to architectural obfuscation; until they do.
> Ascii in Hex Dumps
Kind of a side note, but most printable ASCII falls roughly between 0x20 and 0x7E, so if you're looking for text in a binary look for groupings in that range.
## 32 v 64 bit
In case you come from an x86_64-ish background, know that MIPS terminology changes a bit (pun intended).
For those with a 32-bit background, know that these notes deal mostly with 64-bit architectures. So, some quick terminology which might randomly throw you off anyway:
> x86 byte = MIPS byte
> x86 word = MIPS half-word (double-byte); the latter name is dumb but sometimes used, so whatever
> x86 dword = MIPS word (4 bytes)
> x86/64 qword = MIPS dword
Etc. onward with doubles, quads...
So just keep those translations in mind...


@@ -1,19 +1,22 @@
# Lecture 3
# lec3
## One's & Two's Complement (in depth(or something))
## One's & Two's Complement
Recall from last lecture that we wanted to represent `3` with a single nibble, like so: `0b0011`.
_Previous lecture went over signedness of numbers so this section won't as much_.
To make this into a `-3` we:
First we'll deal with flipping bits: this is where you may hear the term _1's complement_.
While not very useful on its own for most purposes, it does help create a separation between _positive_ and _negative_ numbers.
1. Flipped all the bits : `value xor 0xff..`
The only other step after flipping all the bits is just adding 1.
2. Added 1 to the result of step 1
`1001 1110` becomes `0110 0010`.
> Ok, but like, why do I care? we're just multiplying things by -1 how does that matter at all?
> shouldn't that last 2 bits be 01?
It matters because certain types of operations _just suck_ on pretty much every general-use platform.
Close; the reason we have `b10` is that when we compute `b01 + b1` the `1` carries over to the next bit.
The actual term for this is just __negate__; the other way around is essentially cannon fodder.
> Ok, but what does that look like in _assembly_, the thing I came here to learn?
Most assemblers accept something like `neg targetValue`; you can also build it yourself with an _exclusive or_ (`xor targetValue, 0xFF...`) followed by an increment. Keep in mind that the immediate value should be sign-extended to match the target operand's size.


@@ -1,47 +1,43 @@
lec1
====
# lec10
Databases introduction
----------------------
This lecture has a corresponding lab exercise whose instructions can be found in `triggers-lab.pdf`.
First off why do we even need a database and what do they accomplish?
## What is a trigger
Generally a database will have 3 core elements to it:
Something that executes when _some operation_ is performed
1. querying
- Finding things
- Likewise, well-structured data makes querying easier
2. access control
- who can access which data segments and what they can do with
that data
- reading, writing, sending, etc
3. corruption prevention
- mirroring/raid/parity checking/checksums/etc as some examples
## Structure
Modeling Data
-------------
```
create trigger NAME before some_operation
when(condition)
begin
do_something
end;
```
Just like other data problems we can choose what model we use to deal
with data. In the case of sqlite3 the main data model we have is
tables, where we store our pertinent data; later we'll learn that even
data about our data is stored in tables.
To explain: first we `create trigger` followed by some trigger name.
Then we denote that this trigger should fire whenever some operation happens.
The trigger then executes everything in the `begin...end;` section _before_ the new operation happens.
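A runnable version of the template, driven through Python's built-in `sqlite3` module; the table, column, and trigger names are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE accounts (name TEXT, balance INTEGER);

    -- Fires before every insert into accounts; rejects negative balances.
    CREATE TRIGGER no_negative BEFORE INSERT ON accounts
    WHEN new.balance < 0
    BEGIN
        SELECT RAISE(ABORT, 'negative balance');
    END;
""")

con.execute("INSERT INTO accounts VALUES ('alice', 100)")   # passes the WHEN check
try:
    con.execute("INSERT INTO accounts VALUES ('mallory', -5)")
except sqlite3.IntegrityError as e:
    print(e)   # the insert was rejected by the trigger
```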
Because everything goes into a table, we also have to have a
plan for *how* we want to lay out our data in the table. The **schema**
is that design/structure for our database. The **instance** is an
occurrence of that schema with some data inside the fields, i.e. we have
a table sitting somewhere in the database which follows the given
structure of the aforementioned schema.
> `after`
**Queries** are typically known to be declarative; in practice we don't
care about what goes on behind the scenes, since by this
point we assume we have tools we trust and know to be somewhat
efficient.
Likewise, if we want to fire a trigger _after_ some operation we can just replace the `before` keyword with `after`.
> `new.adsf`
Refers to _new_ value being added to a table.
> `old.adsf`
Refers to the _old_ value being changed in a table.
## Trigger Metadata
If you want to look at what triggers exist you can query the `sqlite_master` table.
```
select * from sqlite_master where type='trigger';
```
Finally we have **transactions**, which are a set of operations designed
to commit only if they all complete successfully.
Transactions are not allowed to *partially* fail: if *anything* fails then
everything should be undone and the state should revert to the previous
state. This is useful because if we are, for example, transferring money
to another account we want to make sure that the exchange happens
seamlessly, otherwise we should back out of the operation altogether.


@@ -88,7 +88,7 @@ if __name__ == "__main__":
# build up our heap to display info from
heap = encode(frequencies)[0]
print(heap)
#print(heap)
# decode the binary
decode(heap, binary)


@@ -1,35 +1,38 @@
A\* Pathfinding
===============
# Adjacency list
There are 3 main values used in reference to A\*:
Imagine 8 nodes with no connections
* f = how promising a new location is
* g = distance from origin
* h = estimated distance to the goal
* f = g + h
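A minimal sketch of these values driving A\* on a toy grid, assuming Manhattan distance as `h`; the names and the grid itself are made up:

```python
import heapq

def a_star(grid, start, goal):
    """A* on a grid of 0 (open) / 1 (wall). g = steps from the origin,
    h = Manhattan distance to the goal (two straight shots, ignoring
    barriers), and f = g + h ranks the open list."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_list = [(h(start), 0, start)]          # (f, g, position)
    best_g = {start: 0}
    while open_list:
        f, g, (r, c) = heapq.heappop(open_list)
        if (r, c) == goal:
            return g                            # path length in steps
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0
                    and g + 1 < best_g.get((nr, nc), float("inf"))):
                best_g[(nr, nc)] = g + 1
                heapq.heappush(open_list, (g + 1 + h((nr, nc)), g + 1, (nr, nc)))
    return None                                 # stuck: no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # 6: around the wall and back
```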
To store this data in an _adjacency list_ we need __n__ items to store them.
We'll have 0 __e__dges however, so in total our space is O(n + e) == O(n).
For a grid space our `h` is calculated by two straight shots to the goal
from the current location (ignoring barriers). The grid-space `g` value is
basically the number of steps we've taken from the origin. We maintain
a list of potential nodes only, so if one of the seeking nodes gets us
stuck we can freely remove it, because it succs.
# Adjacency matrix
Time & Space Complexities
==========================
space: O(n^2)
The convention for notation btw is [x,y] meaning:
* _from x to y_
Best-First Search
-----------------
# Breadth first search
Time: O(VlogV + E)
1. add neighbors of current to the queue
2. go through current's neighbors and add their neighbors to the queue
3. keep going until there are no more neighbors to add
4. go through the queue and start popping members out
Dijkstra's
----------
# Depth first search
O(V\^2 + E)
Here we're going deeper into the neighbors
A\*
---
_once we have a starting point_
Worst case is the same as Dijkstra's time
_available just means that node has a non-visited neighbor_
1. if available, go to a neighbor
2. if no neighbors available, visit
3. goto 1
O(V\^2 + E)
# Kahn's Algorithm (topological sort)
# Graph Coloring
When figuring out how many colors we need for the graph, we should note the maximum degree of the graph


@@ -1,60 +1,69 @@
Data storage
============
# Hardware deployment Strategies
Spinning Disks
--------------
Cheaper for more storage
## Virtual Desktop Infrastructure
RAID - Redundant Array of Independent Disk
------------------------------------------
aka zero-clients: a network-hosted OS is what each client uses.
Raid 0: basically cramming multiple drives together and treating them as one.
Data is striped across the drives, but if one fails then you literally
lose a chunk of data.
In some cases that network is a pool of servers which are tapped into.
Clients can vary in specs, as explained below (context: university):
Raid 1: data is mirrored across the drives so it's completely redundant;
if one fails the other is still alive. It's not a backup however,
since file updates affect all the drives.
> Pool for a Library
Raid 5: parity. Combining multiple drives allows us to use parity
data stored on the other drives to recover data if a drive goes
missing. (min 3 drives)
Clients retain low hardware specs since most are just using office applications and not much else.
Raid 6: same in principle as raid 5, but this time with a second
parity block, so two drives can fail before data is lost.
> Pool for an Engineering department
Raid 10: 1 and 0 combined; mirror pairs of drives (raid 1), then
stripe across those pairs (raid 0).
Clients connect to another pool where both clients and pool have better hardware specs/resources.
Network Attached Storage - NAS
------------------------------
The downside is that there is _1 point of failure_: the pool goes down
and so does everyone else, meaning downtime is going to cost way more
than a single machine going down.
Basically space stored on the local network.
Storage Area Network - SAN
------------------------------
Applicable when we virtualise whole OSes for users; we use a storage
device attached to the network to serve different operating systems
# Server Hardware Strategies
Managing Storage
================
> All eggs in one basket
Outsourcing user storage to services like OneDrive, because it
becomes their problem and not ours.
Imagine just one server doing everything
Storage as a Service
====================
* Important to maintain redundancy in this case
* Upgrading is a pain sometimes
Ensure that the OS gets its own space/partition on a drive and give the
user their own partition to ruin. That way the OS (Windows) will only
bloat its own partition into another dimension.
Backup
======
> Buy in bulk, allocate fractions
Other people's data is in your hands, so make sure that you back up data
in some way. External services can be nice if you find that you
constantly need to get to your backups. Tape is good for
archival purposes; keep in mind that it is slow as hell.
Basically have a server that serves up various virtual machines.
# Live migration
Allows us to move live, running virtual machines onto new servers if the current server is running out of resources.
# Containers
_docker_: Virtualize the service, not the whole operating system
# Server Hardware Features
> Things that servers benefit from
* fast I/O
* low-latency CPUs (Xeons > i-series)
* expansion slots
* lots of network ports available
* ECC memory
* remote control
Patch/version control on servers: the update schedule is usually slower/more lax so that servers don't just randomly break all the time.
# Misc
Uptime: more uptime is _going_ to be more expensive. Depending on what you're doing figure out how much downtime you can afford.
# Specs
Like before, _ECC memory_ is basically required for servers, along with a good number of network interfaces and solid disk management.
Remember that the main parameters for choosing hardware are budget and necessity; basically, what can you get away with on the budget at hand.


@@ -1,23 +0,0 @@
# Alejandro's Notes
Here you will find all the notes in reference book format below.
If some of this information is inaccurate or missing details please feel free to submit a merge request or contact me via Email/Discord:
* Email: alejandros714@protonmail.com
* Discord: shockrah#2647
* Public Repository: [gitlab.com/shockrah/csnotes](https://gitlab.com/shockrah/csnotes/)
[Intro to Networking](intro-to-networking-311.html)
[Networking Administration](network-administration-412.html)
[Networking and Security Concepts](network-security-concepts-312.html)
[Intro to Databases](intro-to-databases-363.html)
[Advanced Algorithms](advanced-algorithms-370.html)
[Computer Architecture with MIPS](computer-architecture-337.html)


@@ -1,328 +0,0 @@
/*
* I add this to html files generated with pandoc.
*/
html {
font-size: 100%;
overflow-y: scroll;
-webkit-text-size-adjust: 100%;
-ms-text-size-adjust: 100%;
}
body {
color: #444;
font-family: Georgia, Palatino, 'Palatino Linotype', Times, 'Times New Roman', serif;
font-size: 12px;
line-height: 1.7;
padding: 1em;
margin: auto;
max-width: 42em;
background: #fefefe;
}
a {
color: #0645ad;
text-decoration: none;
}
a:visited {
color: #0b0080;
}
a:hover {
color: #06e;
}
a:active {
color: #faa700;
}
a:focus {
outline: thin dotted;
}
*::-moz-selection {
background: rgba(255, 255, 0, 0.3);
color: #000;
}
*::selection {
background: rgba(255, 255, 0, 0.3);
color: #000;
}
a::-moz-selection {
background: rgba(255, 255, 0, 0.3);
color: #0645ad;
}
a::selection {
background: rgba(255, 255, 0, 0.3);
color: #0645ad;
}
p {
margin: 1em 0;
}
img {
max-width: 100%;
}
h1, h2, h3, h4, h5, h6 {
color: #111;
line-height: 125%;
margin-top: 2em;
font-weight: normal;
}
h4, h5, h6 {
font-weight: bold;
}
h1 {
font-size: 2.5em;
}
h2 {
font-size: 2em;
}
h3 {
font-size: 1.5em;
}
h4 {
font-size: 1.2em;
}
h5 {
font-size: 1em;
}
h6 {
font-size: 0.9em;
}
blockquote {
color: #666666;
margin: 0;
padding-left: 3em;
border-left: 0.5em #EEE solid;
}
hr {
display: block;
height: 2px;
border: 0;
border-top: 1px solid #aaa;
border-bottom: 1px solid #eee;
margin: 1em 0;
padding: 0;
}
pre, code, kbd, samp {
color: #000;
font-family: monospace, monospace;
_font-family: 'courier new', monospace;
font-size: 0.98em;
}
pre {
white-space: pre;
white-space: pre-wrap;
word-wrap: break-word;
}
b, strong {
font-weight: bold;
}
dfn {
font-style: italic;
}
ins {
background: #ff9;
color: #000;
text-decoration: none;
}
mark {
background: #ff0;
color: #000;
font-style: italic;
font-weight: bold;
}
sub, sup {
font-size: 75%;
line-height: 0;
position: relative;
vertical-align: baseline;
}
sup {
top: -0.5em;
}
sub {
bottom: -0.25em;
}
ul, ol {
margin: 1em 0;
padding: 0 0 0 2em;
}
li p:last-child {
margin-bottom: 0;
}
ul ul, ol ol {
margin: .3em 0;
}
dl {
margin-bottom: 1em;
}
dt {
font-weight: bold;
margin-bottom: .8em;
}
dd {
margin: 0 0 .8em 2em;
}
dd:last-child {
margin-bottom: 0;
}
img {
border: 0;
-ms-interpolation-mode: bicubic;
vertical-align: middle;
}
figure {
display: block;
text-align: center;
margin: 1em 0;
}
figure img {
border: none;
margin: 0 auto;
}
figcaption {
font-size: 0.8em;
font-style: italic;
margin: 0 0 .8em;
}
table {
margin-bottom: 2em;
border-bottom: 1px solid #ddd;
border-right: 1px solid #ddd;
border-spacing: 0;
border-collapse: collapse;
}
table th {
padding: .2em 1em;
background-color: #eee;
border-top: 1px solid #ddd;
border-left: 1px solid #ddd;
}
table td {
padding: .2em 1em;
border-top: 1px solid #ddd;
border-left: 1px solid #ddd;
vertical-align: top;
}
.author {
font-size: 1.2em;
text-align: center;
}
@media only screen and (min-width: 480px) {
body {
font-size: 14px;
}
}
@media only screen and (min-width: 768px) {
body {
font-size: 16px;
}
}
@media print {
* {
background: transparent !important;
color: black !important;
filter: none !important;
-ms-filter: none !important;
}
body {
font-size: 12pt;
max-width: 100%;
}
a, a:visited {
text-decoration: underline;
}
hr {
height: 1px;
border: 0;
border-bottom: 1px solid black;
}
a[href]:after {
content: " (" attr(href) ")";
}
abbr[title]:after {
content: " (" attr(title) ")";
}
.ir a:after, a[href^="javascript:"]:after, a[href^="#"]:after {
content: "";
}
pre, blockquote {
border: 1px solid #999;
padding-right: 1em;
page-break-inside: avoid;
}
tr, img {
page-break-inside: avoid;
}
img {
max-width: 100% !important;
}
@page :left {
margin: 15mm 20mm 15mm 10mm;
}
@page :right {
margin: 15mm 10mm 15mm 20mm;
}
p, h2, h3 {
orphans: 3;
widows: 3;
}
h2, h3 {
page-break-after: avoid;
}
}


@@ -1,12 +1,3 @@
# Holy Moly
These notes are ancient but I like keeping them around because it reminds me of
my college days when I didn't really know much :3
Not sure who will find value from these but here's some random tidbits of knowledge
# Everyone else
To some degree these notes are personal, so there are a few mistakes that I just can't be bothered dealing with.

scripts/build-html.sh Executable file → Normal file

@@ -1,18 +1,10 @@
mkdir -p public/img
cp gitlab-page/style.css public/style.css
#!/bin/sh
md() {
pandoc -s --css style.css `ls -v $1`
}
# Locations of important md files to build
md "311/lec/*.md" > public/intro-to-networking-311.html
md "312/*.md" > public/network-security-concepts-312.html
md "337/lec/*.md" > public/computer-architecture-337.html
cp 337/img/* public/img/
md "363/lec/*.md" > public/intro-to-databases-363.html
md "370/notes/*.md" > public/advanced-algorithms-370.html
md "412/*.md" > public/network-administration-412.html
md gitlab-page/index.md > public/index.html
lecture_dirs='311/lec/ 312/ 337/lec/ 363/lec/ 370/notes/ 412/'
mkdir -p public
for d in $lecture_dirs;do
echo $d;
pandoc `ls --sort=version $d` -o "public/$d.html"
done


@@ -1,3 +0,0 @@
#!/bin/sh
cd public
python -m SimpleHTTPServer