good lord theres so much missing in this commit

This commit is contained in:
Medium Fries 2018-10-10 19:42:35 -07:00
parent 6fff1deba0
commit c2cb4ab8d9
9 changed files with 207 additions and 3 deletions


@ -25,5 +25,5 @@ Say we have two competing sessions:
Certain types of applications won't be rate limited by TCP fairness.
Streaming video, for instance, won't be, since we just want to _throw_ data across as much as possible.
-That being said we do have to tolerate loss since UDP doesn't account for data loss on the line in any significant manner.
+This also means we have to account for loss and tolerate it when it does happen, because UDP doesn't account for loss on its own.

cst311/lec/lec12.md (new file, 53 lines)

@ -0,0 +1,53 @@
# lec12
Network Layer
Here, instead of messages or segments, we refer to the chunks of data as _datagrams_.
We concern ourselves with _host-to-host communication_.
There are two major functions to worry about:
* forwarding
    * getting a datagram from an input interface to the appropriate output interface
* routing
    * concerned with determining the route datagrams take from source to destination
## Virtual Circuits
_THIS SECTION IS BAREBONES_
Datagram Service: network provides network-layer _connectionless_ service.
Virtual Circuit: network provides network-layer _connection_ service.
### Setting up a connection
Virtual connections: before two end hosts start sending data to each other, both must determine the route over which they will communicate.
This means we have to get the routers between the hosts involved in this initial setup.
When the routers get involved, each one records an incoming VC number on its incoming bus and an outgoing VC number on its outgoing bus.
These entries are stored in the router's _forwarding table_.
## VC Implementation
1. Path from source to destination
2. VC numbers, one for each link along the path
3. Entries in the forwarding table of each router along the path
### Forwarding Table
__pls clarify section__
Each router has a forwarding table with entries describing its incoming/outgoing buses (interfaces).
The router has incoming and outgoing interfaces on these lines; when a datagram comes into the router it carries a VC#.
Upon exit, each VC# on the incoming interface corresponds to a VC# on some outgoing interface, as in the small sketch below.
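A minimal sketch of how such a table drives forwarding (interface and VC numbers are made up, not from the lecture): each entry maps an (incoming interface, incoming VC#) pair to an (outgoing interface, outgoing VC#) pair, and the VC# is rewritten as the datagram passes through.
```python
# Hypothetical VC forwarding table for a single router:
# (incoming interface, incoming VC#) -> (outgoing interface, outgoing VC#)
vc_table = {
    (1, 12): (2, 22),
    (2, 63): (1, 18),
    (3, 7):  (2, 17),
}

def forward(in_interface, in_vc):
    """Return the outgoing interface and the rewritten VC# for a datagram."""
    return vc_table[(in_interface, in_vc)]

print(forward(1, 12))  # -> (2, 22)
```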
## IP Addresses & Datagram forwarding tables
> What is an IP address?
Think of the address of a variable in memory.
Instead of a variable we have an end host.
Instead of an address in memory we have an address in some network.
Usually we'll write a destination address in the header of a datagram so that we know where the data is meant to go.
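In contrast to the VC table above, a datagram network forwards purely on that destination address. A toy sketch (addresses and interface numbers invented for illustration; real routers match on address prefixes rather than exact addresses):
```python
# Hypothetical destination-based forwarding table:
# destination address -> outgoing interface (exact match, to keep the sketch simple).
forwarding_table = {
    "203.0.113.7": 2,
    "198.51.100.9": 1,
}

datagram = {"dest": "203.0.113.7", "payload": b"hello"}
print(forwarding_table[datagram["dest"]])  # -> 2
```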

cst337/lec/lec11.md (new file, 82 lines)

@ -0,0 +1,82 @@
# lec11
_diagram references implied for now_
Sequential logic: at this point we are effectively dealing with state (_state machines_). Simply put, we have _memory_ now.
## State Tables
Here Q~s~ is our _current state_ while Q~s+1~ is the next state; the figure and table below are for an `and` gate whose output is fed back in as one of its inputs.
![](../img/lec11fig1.png)
| A | Q~s~ | Q~s+1~ |
|---|---|---|
| 0 | 0 | 0|
| 0 | 1 | 0|
| 1 | 0 | 0|
| 1 | 1 | 1|
We can try the same thing with an `or` gate:
![](../img/lec11fig2.png)
Keeping in mind that our effective input here is only `A`.
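A small sketch (mine, not the lecture's) that enumerates both next-state tables by feeding each gate's output back in as its second input; note how the `or` version, once its output reaches 1, holds a 1 no matter what `A` does afterwards:
```python
# Q_next = gate(A, Q_current), enumerated over all input/state combinations.
def next_state_table(gate):
    return [(a, q, gate(a, q)) for a in (0, 1) for q in (0, 1)]

print(next_state_table(lambda a, q: a & q))  # matches the `and` table above
print(next_state_table(lambda a, q: a | q))  # the `or` variant latches a 1 forever
```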
## Latches
Namely we are going to look at set-reset latches.
They should be able to do two things:
* store a state
* change state upon appropriately changed signals.
![](../img/lec11fig3.png)
Note that in the above state table two of the rows show up as illogical, because those input combinations simply don't make sense in this context.
The actual gate implementation of the above would look like the following.
![](../img/lec11fig4.png)
The same can also be done with `nor` gates, making the whole operation much more efficient in transistor usage.
![](../img/lec11fig5.png)
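A rough behavioral sketch of the `nor`-based latch (function and variable names are mine): Q is `nor(R, Q̄)` and Q̄ is `nor(S, Q)`, so S=1 sets, R=1 resets, S=R=0 holds, and asserting both at once is the combination that doesn't make sense.
```python
def nor(a, b):
    return int(not (a or b))

def sr_latch(s, r, q, q_bar):
    """Iterate the cross-coupled nor gates until the outputs settle."""
    for _ in range(4):
        q, q_bar = nor(r, q_bar), nor(s, q)
    return q, q_bar

state = sr_latch(1, 0, 0, 1)    # set   -> (1, 0)
state = sr_latch(0, 0, *state)  # hold  -> (1, 0)
state = sr_latch(0, 1, *state)  # reset -> (0, 1)
print(state)
```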
## Clocking & Frequency
The period of the square wave in this case can be used to find the frequency.
We simply note that `1/T = F`.
This frequency is measured in cycles/second or _hertz_.
![](../img/lec11squareWave.png)
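A quick worked example with an arbitrarily chosen period (not a number from the lecture):
```python
T = 2e-9     # a 2 ns clock period
F = 1 / T    # frequency in hertz (cycles per second)
print(F)     # 500000000.0, i.e. 500 MHz
```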
### Setup time & Hold time
Setup time would be some amount of time after the previous point where we wait for the combinational logic to propagate its results into memory.
A short period of time in the valley would be the setup time.
Hold time is the time we wait before we start feeding new input into our combinational logic (unit).
Say we wanted to start our combinational logic at the beginning of one of our plateaus.
## D Latches
_D stands for data_
![](../img/lec11dlatch.png)
Essentially we want to read in D only when the clock signal is high.
If it's low, we want to _block_ the signal from reaching our output state.
The latch simply allows or disallows our input from passing through to the other side based on whether the clock is high or low.
If D was 0 then it stays 0 when the clock goes low.
If D was 1 then it stays 1 when the clock goes low.
### Flip-Flop & Edge Triggering
Say we want to grab whatever D is, but only when we approach a falling edge.
The first latch opens and grabs any changes coming off D, then the second opens just as the first closes.
We can reverse the two, as in the next figure, to achieve the opposite result: reading on the rising edge.
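A behavioral sketch of the D latch and the two-latch falling-edge flip-flop described above (class names are mine, purely illustrative):
```python
class DLatch:
    """Level-sensitive: transparent while clk is high, holds while clk is low."""
    def __init__(self):
        self.q = 0

    def tick(self, d, clk):
        if clk:
            self.q = d
        return self.q


class FallingEdgeFlipFlop:
    """Two latches back to back: the first is open while clk is high, the
    second while clk is low, so the output only updates on the falling edge."""
    def __init__(self):
        self.first = DLatch()
        self.second = DLatch()

    def tick(self, d, clk):
        self.first.tick(d, clk)
        return self.second.tick(self.first.q, int(not clk))


ff = FallingEdgeFlipFlop()
print(ff.tick(1, 1))  # clk high: the first latch grabs D=1, output is still 0
print(ff.tick(0, 0))  # clk falls: the output becomes the 1 captured above
```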


@ -1,4 +1,4 @@
-# Subject - Computer Architecture & Assembly with MIPS \
+# Subject - Computer Architecture & Assembly with MIPS
Material on the hardware side of this course covers everything from transistors up to logic gates.
Assembly is in the second half of this course (_ymmv_ if the course is flip-flopped for you).

Binary file not shown.

Binary file not shown.

cst363/lec/lec11.md (new file, 26 lines)

@ -0,0 +1,26 @@
# lec11
_this section still needs more info_
## Query processing
Keep in mind we are still concerned with systems like sqlite3.
First we have to parse the input to validate it.
Then we should also validate the input's semantics, ensuring that the given _tables, objects, etc._ are correct.
Finally we have to evaluate the input, usually by converting the given expression to an equivalent relational algebra expression (a tiny example follows the list below).
If we can optimize this expression we can then create more efficient queries.
To do this we take into account 3 main factors:
1. I/O time
    * if we have to write something to disk over and over again then we pay the slow disk-access cost every single time
2. Computational Time
3. Required memory/disk space
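As a tiny illustration (hypothetical table and query, not from the lecture), the SQL query `SELECT name FROM people WHERE age < 30` corresponds to the relational algebra expression below; the optimizer is then free to rearrange such expressions, e.g. pushing selections as close to the base tables as possible, before weighing the candidates against the three factors above.

$$\pi_{name}\left(\sigma_{age < 30}(\text{people})\right)$$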
### Cost function
## Performance of Disk and RAM
## DB Block
## Disk Buffers

cst363/lec/lec12.md (new file, 43 lines)

@ -0,0 +1,43 @@
# lec12
## Lab
This section has a lab activity in `lab/`, with instructions in `in-memory-searches.pdf` and `on-disk-search.pdf`.
## In-memory Search
_For now we'll deal with trivial queries._
Say we perform this query: `select name from censusData where age<30;`.
If we do a linear search we will nearly always have to go through all `N` records in the table to get the data we want out.
Binary searches prove to be quicker but our data must be ordered in some fashion.
_Note:_ just recall that we can only sort a table's entries by a single column at any given time.
The other problem we encounter is that our data must _always_ remain sorted, which means inserting, modifying, and deleting data carries much larger overhead than with other methods.
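A minimal sketch of the binary-search idea applied to the query above, over a table kept sorted on `age` (the in-memory layout here is invented for illustration; this is not the lab's code):
```python
import bisect

# Records kept sorted by age; each record is (age, name).
census_data = [(19, "Ana"), (24, "Bo"), (28, "Cam"), (35, "Dee"), (41, "Eli")]
ages = [age for age, _ in census_data]

# select name ... where age < 30: binary search finds the cut-off point in
# O(log N) comparisons instead of scanning all N rows.
cutoff = bisect.bisect_left(ages, 30)
print([name for _, name in census_data[:cutoff]])  # ['Ana', 'Bo', 'Cam']
```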
## On-Disk Search
There are two main ways of storing the data on disk: by record or by column.
Likewise we also have to deal with variable-length data types like `varchar`, which provide an upper bound but not necessarily a fixed size.
### Blocks
Blocks contain records or sometimes columns depending on the implementation.
We usually allocate these blocks as 4K or 8K bytes of space, since disk sectors come in 512-byte chunks.
These things are taken into account because I/O time sucks; it always has, and until SSDs' lifetime performance stops sucking, it always will.
The main issue with getting data off the disk isn't the read time, it's the time to find something in the first place. This is because we write to the disk in a fashion that _isn't_ completely linear.
Also keep in mind that our total I/O time to search for something is going to be T~access~ + T~transfer~\*N~records~.
* If we search on a key then on average we only have to search half the records.
* Also this is assuming that _all_ the blocks are right next to each other.
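Plugging made-up numbers into that formula (purely illustrative, not timings from the lecture) shows how much the access cost dominates once blocks are scattered:
```python
import math

t_access = 0.010      # ~10 ms to find/position on a block
t_transfer = 0.00001  # ~0.01 ms to stream one record once positioned
n_records = 1_000_000

# Contiguous blocks: pay the access cost once, then stream every record.
linear_scan = t_access + t_transfer * n_records

# Binary search over scattered blocks: ~log2(N) probes, each paying the access cost.
probes = math.ceil(math.log2(n_records))
binary_search = probes * (t_access + t_transfer)

print(f"linear scan over contiguous blocks: {linear_scan:.2f} s")   # ~10.01 s
print(f"binary search, one seek per probe:  {binary_search:.2f} s") # ~0.20 s
```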
If we search some blocks that happen to be right next to each other then we only need to bother finding the first block, but with a binary search we have to pay that access cost for _every single block_ we touch.
This is because unlike memory, which is managed by a well written OS, the disk is dumb... very dumb.
The way it (the physical disk) writes/modifies data is nearly always trivial, meaning there is no clever scheme behind how the data is laid out.
This is half the reason we say that I/O time sucks: hard disks are slow and stupid compared to memory, which is quick and clever.


@ -16,4 +16,4 @@ Info here should be useful for general purpose but material is guided for CSUMB'
### Regarding Accuracy
-Until I am more experienced some of this info might be innaccurate however, I will be keeping some tabs on this repo since it's inevtiable that someone somewhere use this information for their own studies.
+Until I am more experienced some of this info may be imprecise; however, I will be keeping tabs on this repo since it's inevitable that someone somewhere will use this information for their own studies.