
Final review post for csci-331

dockerScript
jrtechs committed 4 years ago
commit 6ca589c7df
16 changed files with 170 additions and 0 deletions
  1. +170 −0 blogContent/posts/data-science/csci-331-final-review.md
  2. BIN blogContent/posts/data-science/media/final/bellman.png
  3. BIN blogContent/posts/data-science/media/final/ccn.PNG
  4. BIN blogContent/posts/data-science/media/final/decisionTree.PNG
  5. BIN blogContent/posts/data-science/media/final/inductiveLearning.PNG
  6. BIN blogContent/posts/data-science/media/final/learningAgent.PNG
  7. BIN blogContent/posts/data-science/media/final/logicNeurons.PNG
  8. BIN blogContent/posts/data-science/media/final/lstm.PNG
  9. BIN blogContent/posts/data-science/media/final/multiLayer.PNG
  10. BIN blogContent/posts/data-science/media/final/ock.PNG
  11. BIN blogContent/posts/data-science/media/final/pitts.PNG
  12. BIN blogContent/posts/data-science/media/final/propLogic.png
  13. BIN blogContent/posts/data-science/media/final/propositional.png
  14. BIN blogContent/posts/data-science/media/final/singleLayer.PNG
  15. BIN blogContent/posts/data-science/media/final/svm.PNG
  16. BIN blogContent/posts/data-science/media/final/wumpus.png

+170 −0 blogContent/posts/data-science/csci-331-final-review.md

@@ -0,0 +1,170 @@
Quick review sheet for Dr. Homan's RIT CSCI-331 final.
# Learning from examples (Ch 18)
- Supervised learning: you already know the answers (labeled examples)
- Reinforcement learning: learning from rewards
- Unsupervised learning: no labels, e.g. clustering
![](media/final/learningAgent.PNG)
## Inductive learning problems
![](media/final/inductiveLearning.PNG)
![](media/final/ock.PNG)
Ockham's razor: Maximize a combination of consistency and simplicity.
Oftentimes, overly complex models that perfectly fit the training data do not generalize well to new data.
## Decision trees
Often the most natural way to represent a Boolean function, but they often don't generalize well.
![](media/final/decisionTree.PNG)
## Entropy
Decision trees use entropy to pick which input to branch on first.
A subset with a 50/50 class split is less useful than one with an 80/20 split because the 50/50 split still has the most uncertainty ("information") left in it.
We pick the input whose split minimizes the remaining entropy, i.e., maximizes the information gain.
$$
\text{entropy} = -\sum_{i=1}^{n} P_i \log_2 P_i
$$
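As a quick check (a minimal sketch; the `entropy` helper below is just illustrative), a 50/50 split carries a full bit of uncertainty while an 80/20 split carries less:

``` python
import math

def entropy(probabilities):
    """Shannon entropy of a class distribution, in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))  # 1.0   -> maximum uncertainty
print(entropy([0.8, 0.2]))  # ~0.72 -> purer, more informative split
```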
## Neural networks
Loosely inspired by the human brain.
McCulloch-Pitts neurons:
![](media/final/pitts.PNG)
Examples of logic functions:
![](media/final/logicNeurons.PNG)
### Single Layer Perceptrons
![](media/final/singleLayer.PNG)
### Multi-layer Perceptrons
![](media/final/multiLayer.PNG)
## Backpropagation
A way of incrementally adjusting the weights so that the model better fits the training data.
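A minimal sketch of the idea for a single sigmoid unit (the `update_weights` helper, learning rate, and training example are illustrative assumptions, not the full multi-layer algorithm): run the forward pass, measure the error, and nudge each weight in the direction that reduces it.

``` python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update_weights(w, x, y, alpha=0.1):
    """One gradient step for a single sigmoid unit on squared error."""
    a = sigmoid(np.dot(w, x))        # forward pass
    delta = (y - a) * a * (1 - a)    # error times the sigmoid's derivative
    return w + alpha * delta * x     # adjust each weight toward lower error

w = np.zeros(3)
for _ in range(1000):                # repeat until the output fits the example
    w = update_weights(w, np.array([1.0, 0.0, 1.0]), 1.0)
```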
## SVMs: Support Vector Machine
- Works in very high-dimensional spaces
- As long as the data is sparse, the curse of dimensionality is not an issue
- By default it assumes the data can be linearly separated if you project it into enough dimensions. Sometimes you use something called the kernel trick to distort the space so the data becomes linearly separable (see the sketch after the figure below).
![](media/final/svm.PNG)
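A minimal sketch of why distorting the space helps (it builds the extra feature explicitly; the real kernel trick gets the same effect through inner products without ever constructing the mapping): XOR-style labels are not linearly separable in two dimensions, but adding a product feature makes them so.

``` python
# XOR labels cannot be separated by a line in (x1, x2) alone, but adding the
# feature x1*x2 (what a degree-2 polynomial kernel captures implicitly)
# makes a single hyperplane sufficient.
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 0]

def classify(x1, x2):
    # hyperplane x1 + x2 - 2*(x1*x2) = 0.5 in the lifted space
    return 1 if x1 + x2 - 2 * (x1 * x2) > 0.5 else 0

print([classify(x1, x2) for x1, x2 in points] == labels)  # True
```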
## CNNs: Convolutional Neural Networks
![](media/final/ccn.PNG)
## LSTMs: Long Short-Term Memory
- Heavily used in natural language processing (NLP).
![](media/final/lstm.PNG)
# Probabilistic Learning (Ch. 20)
## Maximum A Posteriori approximation (MAP)
You take the single most probable hypothesis and use it alone to make your prediction.
This approximates the full Bayesian approach, which instead predicts using the weighted average of the predictions of all candidate hypotheses, weighted by their posterior probabilities.
``` python
def posteriors(priors, likelihoods):
    """
    Equation 20.1: P(h_i|d) = gamma * P(d|h_i) * P(h_i)

    gamma is 1/P(d), where P(d) is the sum of P(d|h_i) * P(h_i)
    over all hypotheses.  In the candy-bag example, P(h_i) is how
    common bag h_i is in the wild, and P(d|h_i) is the product of
    each observation's probability under that bag's distribution.
    """
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    gamma = 1.0 / sum(unnormalized)
    return [gamma * u for u in unnormalized]
```
## Maximum Likelihood approximation (MLE)
This process has three steps:
1. Write down an expression for the likelihood of the data as a function of the parameters.
2. Write down the derivative of the log likelihood with respect to each parameter.
3. Find the parameter values such that the derivatives are zero.
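For example (using the textbook's bag-of-candy setup purely as an illustration): if a bag yields $c$ cherry and $\ell$ lime candies and the unknown cherry proportion is $\theta$, the three steps give

$$
L(\theta) = \theta^{c}(1-\theta)^{\ell}, \qquad
\frac{\partial}{\partial \theta} \log L(\theta) = \frac{c}{\theta} - \frac{\ell}{1-\theta} = 0
\quad\Rightarrow\quad \theta = \frac{c}{c+\ell}
$$

so the maximum-likelihood estimate is just the observed fraction of cherry candies.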
## EM (Expectation-Maximization)
Alternates between assigning data points to latent classes (E step) and re-estimating the class parameters (M step); k-means clustering is the hard-assignment version of this idea.
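A minimal k-means sketch (the initialization, iteration count, and empty-cluster handling are illustrative assumptions): the E-like step assigns each point to its nearest centroid, and the M-like step moves each centroid to the mean of its assigned points.

``` python
import numpy as np

def kmeans(points, k, iterations=20):
    rng = np.random.default_rng(0)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # E step: assign every point to its closest centroid
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # M step: move each centroid to the mean of its assigned points
        centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return centroids, labels
```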
# Reinforcement learning (Ch. 21)
MDP (Markov decision process): the goal is to find an optimal policy.
The agent often has to explore the state space to learn the rewards.
## Bellman equation
![](media/final/bellman.png)
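For reference, the utility form of the Bellman equation (with reward $R$, discount factor $\gamma$, and transition model $P(s' \mid s, a)$) is

$$
U(s) = R(s) + \gamma \max_{a} \sum_{s'} P(s' \mid s, a)\, U(s')
$$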
# Logic (Ch 7)
- knowledge base = set of sentences in a formal language
- inference engine: domain-independent algorithms
- declarative approach to logic: tell the agent what it needs to know
![](media/final/propositional.png)
- Logics are formal languages for representing information so that conclusions can be drawn
- syntax defines the sentences in the language
- semantics define the meaning
- Models are formally structured worlds with respect to which truth can be evaluated.
## Propositional Logic
- Assumes the world contains facts; models assign truth values to the propositional symbols.
![](media/final/propLogic.png)
## Entailment
- Entailment means that one thing follows from another.
- KB |= alpha. Knowledge base KB entails sentence "alpha" iff "alpha" is true in all worlds where KB is true. Ex: x + y = 4 entails 4 = x + y
- In other words, entailment is a relationship between sentences (syntax) that is based on their meaning (semantics)
![](media/final/wumpus.png)
## Inference
- Inference: Deriving sentences from other sentences
- Soundness: derivations produce only entailed sentences
- Completeness: derivations can produce all entailed sentences
## Forward chaining
Forward chaining will find everything that is entailed by the knowledge base. The basic idea: repeatedly check for rules whose premises are all satisfied in the knowledge base and add their conclusions to the knowledge base, until the query is found or no new facts can be added.
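A minimal sketch of propositional forward chaining over Horn clauses (the `rules`/`facts` representation is an assumption for illustration; the premise-counting idea follows the standard algorithm):

``` python
from collections import deque

def forward_chaining(rules, facts, query):
    """rules: list of (premises, conclusion) pairs; facts: known symbols."""
    count = {i: len(premises) for i, (premises, _) in enumerate(rules)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(rules):
            if p in premises:
                count[i] -= 1              # one more premise satisfied
                if count[i] == 0:          # rule fires: conclusion becomes known
                    agenda.append(conclusion)
    return False

# Example: P => Q,  L and M => P,  B and L => M,  A and P => L,  A and B => L
rules = [(["P"], "Q"), (["L", "M"], "P"), (["B", "L"], "M"),
         (["A", "P"], "L"), (["A", "B"], "L")]
print(forward_chaining(rules, ["A", "B"], "Q"))  # True
```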
## Resolution
Resolution is sound and complete for propositional logic.
## First-order logic (Ch 8)
First-order logic (FOL), like natural language, assumes the world contains objects, relations, and functions. It has greater expressive power than propositional logic.
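For example, a single FOL sentence can quantify over every object at once, something propositional logic could only mimic with a separate sentence per individual:

$$
\forall x \; King(x) \land Greedy(x) \Rightarrow Evil(x)
$$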
