db: still needs definitions lec13: review

Medium Fries 2018-10-15 22:20:02 -07:00
parent 751c230896
commit 81fe2d13cf
3 changed files with 62 additions and 2 deletions

cst311/lec/lec13.md Normal file

@ -0,0 +1,48 @@
# lec13
## Datagram Forwarding
Prefix matching: instead of branching on address ranges for a given destination address, we walk the address bit by bit (a depth-first descent through our stored prefixes) to determine where that address should go.
This means it will fall into one of our address ranges and we can determine which interface to forward that datagram to.
Essentially: instead of branching, we follow the bits of each datagram's destination and keep the longest prefix that matches.
_We search the binary values of the address._
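
A minimal sketch of this idea using a binary trie; the prefixes and interface names below are made up for illustration, not taken from the lecture. We insert each prefix, then walk a destination address bit by bit and keep the last (longest) prefix we passed.

```python
class TrieNode:
    def __init__(self):
        self.children = {}      # '0' or '1' -> TrieNode
        self.interface = None   # set when a stored prefix ends at this node

def insert(root, prefix_bits, interface):
    """Store a binary prefix (e.g. '1100') and its outgoing interface."""
    node = root
    for bit in prefix_bits:
        node = node.children.setdefault(bit, TrieNode())
    node.interface = interface

def longest_prefix_match(root, address_bits):
    """Walk the address bit by bit, remembering the longest prefix seen so far."""
    node, best = root, None
    for bit in address_bits:
        if bit not in node.children:
            break
        node = node.children[bit]
        if node.interface is not None:
            best = node.interface
    return best

root = TrieNode()
insert(root, "1100", "interface 0")
insert(root, "110",  "interface 1")
insert(root, "0",    "interface 2")

print(longest_prefix_match(root, "11001010"))  # interface 0 (longest match wins)
print(longest_prefix_match(root, "01111111"))  # interface 2
```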
## Output ports
This is more or less the point where packets can be dropped along their route.
The reason is that the output queue for that port may already be full when the switching fabric tries to deliver more datagrams to it.
We detect that the queue is full while data keeps arriving from the fabric, so we drop that data before anything else happens so that we don't mix data arbitrarily.
Each output port has its own queue buffer.
However, input ports can have their own issues too.
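
A minimal drop-tail sketch of such an output queue (the capacity and datagram names are made up): when the buffer is full, the datagram arriving from the fabric is simply discarded.

```python
from collections import deque

class OutputPort:
    def __init__(self, capacity):
        self.capacity = capacity   # maximum datagrams the buffer can hold
        self.queue = deque()
        self.dropped = 0

    def arrive_from_fabric(self, datagram):
        """The switching fabric delivers a datagram to this output port."""
        if len(self.queue) >= self.capacity:
            self.dropped += 1      # buffer full: drop the arriving datagram
            return False
        self.queue.append(datagram)
        return True

    def transmit(self):
        """The outgoing link is free: send the datagram at the head of the queue."""
        return self.queue.popleft() if self.queue else None

port = OutputPort(capacity=2)
for d in ["d1", "d2", "d3"]:
    port.arrive_from_fabric(d)     # d3 is dropped
print(port.dropped)                # 1
print(port.transmit())             # d1
```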
## Input ports
Just as before, we first read data off the wire and look up where that data is meant to go.
This lookup takes time, however, so if our input queue gets too full we sometimes have to drop datagrams from memory completely.
## Scheduling
Now we'll look at which queued packet to send next (and, when the buffer is full, which to drop).
### Priority
Send highest priority packets first.
Say we have to decide between two packets: which one goes out first?
In this case the higher-priority packet goes first, then the lower.
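
A minimal sketch of that decision using a heap; the priority values and packet names are made up (smaller number = higher priority).

```python
import heapq

queue = []
heapq.heappush(queue, (2, "low-priority packet"))
heapq.heappush(queue, (1, "high-priority packet"))  # 1 = higher priority

while queue:
    priority, packet = heapq.heappop(queue)
    print("transmit:", packet)   # the high-priority packet goes out first
```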
### Weighted Fair queueing
* Generalized Round Robin: each class gets a weighted amount of service in each cycle.
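
A minimal weighted round-robin sketch (the traffic classes, weights, and packet names are invented for illustration): in each cycle, a class with weight w may send up to w packets.

```python
from collections import deque

# class name -> (weight, queue of waiting packets)
classes = {
    "voice": (3, deque(["v1", "v2", "v3", "v4"])),
    "web":   (1, deque(["w1", "w2"])),
}

def one_cycle(classes):
    """One round: each class gets to send up to `weight` packets."""
    sent = []
    for name, (weight, q) in classes.items():
        for _ in range(weight):
            if q:
                sent.append((name, q.popleft()))
    return sent

print(one_cycle(classes))  # voice sends 3 packets, web sends 1
print(one_cycle(classes))  # the leftovers go out in the next cycle
```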
## IP Datagram Format
Let's start with the header data.
Each datagram is given an IP header which tells other nodes in the network where that datagram is meant to finally end up.
If a datagram is too large for a link, however, we might fragment it, giving each _chunk_ its own header that tells the other routers where it's supposed to go and whether it is part of a larger datagram.
When the pieces finally arrive at their destination they can be reassembled using that header information.
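
A minimal sketch of the fragmentation arithmetic, assuming a 20-byte IPv4 header and using the common example of a 4000-byte datagram crossing a 1500-byte-MTU link. Offsets are recorded in 8-byte units, and the MF (more fragments) flag is 1 on every fragment except the last.

```python
def fragment(total_len, mtu, header_len=20):
    """Split a datagram into (offset, length, MF) fragments for a given MTU."""
    data_len = total_len - header_len
    per_frag = (mtu - header_len) // 8 * 8   # data per fragment, multiple of 8 bytes
    frags, offset = [], 0
    while data_len > 0:
        chunk = min(per_frag, data_len)
        more = data_len > chunk              # are more fragments still to come?
        frags.append({"offset": offset // 8, "length": chunk + header_len, "MF": int(more)})
        offset += chunk
        data_len -= chunk
    return frags

for f in fragment(4000, 1500):
    print(f)
# {'offset': 0,   'length': 1500, 'MF': 1}
# {'offset': 185, 'length': 1500, 'MF': 1}
# {'offset': 370, 'length': 1040, 'MF': 0}
```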


@ -1,4 +1,4 @@
# lec11
# lec12
## Lab


@ -15,4 +15,16 @@ Sorting the indexes allows us to search them _much faster_ than we could ever do
Then we simply add a pointer to the index's list of associated pointers.
It's important to note that indexes are tables, just like everything else in SQL.
The biggest problem we have with indexing is that if we have a large number of entries, then we end up storing a huge number of index entries and pointers.
In order to avoid this, we don't index all of the entries.
Instead of taking all entries, we put only every other entry into our index, or even every third.
This means that if a search lands us inside one of the gaps, we still search in a binary fashion, but once we detect that we are inside a _gap_ we linearly search through that gap.
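
A minimal sketch of that sparse-index lookup (the table contents and the gap size of two are made up): binary-search the index for the last key at or below the target, then scan the gap linearly.

```python
import bisect

# Sorted table of (key, row) pairs -- the rows here are invented.
table = [(10, "a"), (20, "b"), (30, "c"), (40, "d"), (50, "e"), (60, "f")]

# Sparse index: only every other key, each with a pointer (position) into the table.
index = [(table[i][0], i) for i in range(0, len(table), 2)]  # [(10, 0), (30, 2), (50, 4)]

def lookup(key):
    keys = [k for k, _ in index]
    i = bisect.bisect_right(keys, key) - 1   # binary search within the index
    if i < 0:
        return None                          # key is smaller than every indexed key
    start = index[i][1]
    for k, row in table[start:start + 2]:    # linear scan through the gap of two
        if k == key:
            return row
    return None

print(lookup(40))  # 'd' -- found by scanning the gap that starts at key 30
print(lookup(35))  # None
```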
## Clustering
First let's recall that ideally our data entries in some table are physically located close to each other on disk _and_ are ordered somehow.
### Dense Clustering
### Sparser Clustering