From 81fe2d13cf45f4445ec02bb2265b45c9f23abb12 Mon Sep 17 00:00:00 2001
From: Medium Fries
Date: Mon, 15 Oct 2018 22:20:02 -0700
Subject: [PATCH] db: still needs definitions lec13: review

---
 cst311/lec/lec13.md | 48 +++++++++++++++++++++++++++++++++++++++++++++
 cst363/lec/lec12.md |  2 +-
 cst363/lec/lec13.md | 14 ++++++++++++-
 3 files changed, 62 insertions(+), 2 deletions(-)
 create mode 100644 cst311/lec/lec13.md

diff --git a/cst311/lec/lec13.md b/cst311/lec/lec13.md
new file mode 100644
index 0000000..803b205
--- /dev/null
+++ b/cst311/lec/lec13.md
@@ -0,0 +1,48 @@
+# lec13
+
+## Datagram Forwarding
+
+Prefix matching: instead of branching on ranges, comparing a given destination address against each of our address ranges, we can do a depth-first search over the address's bits to determine where that address should go.
+The address will fall into one of our address ranges, which tells us which interface to forward that datagram to.
+
+Essentially: instead of branching on ranges, we do a depth-first search to see where each datagram must go.
+_We search the binary values._
+
+## Output ports
+
+This is more or less the point where packets can be dropped along their route.
+The reason is that the queue for that port may already be full when the switching fabric tries to add more to it.
+We detect that the queue is full while the incoming bus is still delivering data, so we drop that data before anything else happens so that we don't mix data arbitrarily.
+
+Those output ports will each have their own queue buffers.
+However, inputs can have their own issues too.
+
+## Input ports
+
+Just as before, we read data from a bus off the wire to determine where that data is meant to go.
+This lookup takes time, however, so sometimes we have to drop datagrams from memory completely if our queue gets too full.
+
+
+## Scheduling
+
+Now we'll look at how to decide which packet to send next (and which to drop).
+
+### Priority
+
+Send the highest-priority packets first.
+Say we have to decide between two packets: which goes into the queue first?
+In this case the higher priority goes in first, then the lower.
+
+### Weighted Fair Queueing
+
+* Generalized Round Robin
+
+Each class gets a weighted amount of service in each cycle.
+
+## IP Datagram Format
+
+Let's start with the header data.
+
+Each datagram is given an IP header which tells the other nodes in a network where that datagram is meant to finally end up.
+If a datagram is large, however, we might fragment it, giving each _chunk_ its own header that tells the other routers where it's supposed to go and whether it is part of a larger datagram.
+When the pieces finally arrive at their destination they can be reassembled using that header information.

diff --git a/cst363/lec/lec12.md b/cst363/lec/lec12.md
index d2ad1fc..37de951 100644
--- a/cst363/lec/lec12.md
+++ b/cst363/lec/lec12.md
@@ -1,4 +1,4 @@
-# lec11
+# lec12
 
 ## Lab

diff --git a/cst363/lec/lec13.md b/cst363/lec/lec13.md
index 9b28969..db511a1 100644
--- a/cst363/lec/lec13.md
+++ b/cst363/lec/lec13.md
@@ -15,4 +15,16 @@
 Sorting the indexes allows us to search them _much faster_ than we could ever do otherwise.
 
 Then we simply add a pointer to the index's list of associated pointers.
 
-It's important to note that indexes are tables, just like everything else in sql.
+The biggest problem with indexing is that if we have a large number of entries, we end up storing a huge number of index keys and pointers.
+To avoid this, we don't take all of the entries.
+Instead, we take only every other entry into our index, or even every third.
+This means that if a search lands inside one of the gaps, we still search the index in a binary fashion, but once we detect which _gap_ we should search, we scan through that gap linearly.
+
+
+## Clustering
+
+First, let's recall that ideally our data entries in some table are physically located close to each other on disk _and_ are ordered somehow.
+
+### Dense Clustering
+
+### Sparse Clustering
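
Review note (not part of the patch): the sparse-index lookup described in cst363/lec/lec13.md can be sketched as below. This is a minimal illustration, not an actual DBMS implementation; `build_sparse_index` and `lookup` are hypothetical names, and the "table" is just a sorted list of (key, value) pairs. It shows the two phases from the notes: binary search over the sparse index keys, then a linear scan of the gap the key falls into.

```python
import bisect

def build_sparse_index(sorted_rows, gap=2):
    """Keep every `gap`-th key from a sorted (key, value) table."""
    # Each index entry is (key, position of that key in the table).
    return [(sorted_rows[i][0], i) for i in range(0, len(sorted_rows), gap)]

def lookup(sorted_rows, index, key):
    """Binary-search the sparse index, then scan the gap linearly."""
    keys = [k for k, _ in index]
    # Rightmost index entry whose key is <= the search key.
    i = bisect.bisect_right(keys, key) - 1
    if i < 0:
        return None  # key is smaller than every indexed key
    start = index[i][1]
    # The gap ends at the next index entry (or at the end of the table).
    end = index[i + 1][1] if i + 1 < len(index) else len(sorted_rows)
    # Linear scan inside the gap.
    for k, v in sorted_rows[start:end]:
        if k == key:
            return v
    return None

rows = [(k, f"row{k}") for k in [2, 3, 5, 7, 11, 13, 17]]
idx = build_sparse_index(rows, gap=3)   # indexes only keys 2, 7, 17
print(lookup(rows, idx, 11))            # key found inside a gap -> row11
print(lookup(rows, idx, 4))             # absent key -> None
```

The trade-off matches the notes: a `gap` of 2 or 3 shrinks the index by that factor, at the cost of a short linear scan bounded by the gap size.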