Homework #5 Solution

This homework will be graded out of 100 points and will count for 5% of the grade. It is optional; any credit received will be treated as extra credit.

 
[30 points] Consider the following variation of the block-stacking domain: blocks are arranged on a table in N^2 locations corresponding to the cells of an N x N grid. At most one block can be located in any given location, and no more than H blocks can be stacked.

 
Define action schemas for moving blocks between locations, between blocks, and between locations and blocks. Define the predicates you use, and make sure the notation in which you write the schemas is clearly defined.
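Purely as an illustration of what a clearly defined notation could look like (a hedged sketch, not the required answer: the predicates OnTable, Clear, and Empty and the dictionary layout are assumptions introduced here), one of the schemas, moving a block between two table locations, might be written as:

# Illustrative sketch of a single action schema, written as a Python dictionary.
# Assumed predicates (not given in the assignment):
#   OnTable(b, x, y)  block b sits directly on the table at cell (x, y)
#   Clear(b)          no block is on top of b
#   Empty(x, y)       no block occupies cell (x, y)
move_location_to_location = {
    "params": ["b", "x1", "y1", "x2", "y2"],
    "preconditions": ["OnTable(b, x1, y1)", "Clear(b)", "Empty(x2, y2)"],
    "add_effects": ["OnTable(b, x2, y2)", "Empty(x1, y1)"],
    "del_effects": ["OnTable(b, x1, y1)", "Empty(x2, y2)"],
}

The schemas that move a block onto another block would additionally need some way of tracking stack heights so that the limit H is respected.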

 
Assume the initial state and the goal state are completely specified (i.e., you know where each block is), and assume the number of blocks is less than or equal to N^2. You do not need to answer anything for this part; these are the assumptions you should make, and they might be important for your algorithm.

 
Propose a trivial algorithm (not a planning algorithm) for solving any problem in this domain.

 
Is your algorithm guaranteed to find an optimal solution in terms of the number of steps?

 
Discuss briefly the advantages, if any, of using a planning system compared to your solution.


 
[20 points]

 
How does the "closed world assumption" affect planning? Be precise.

 
Why are preconditions in action schemas conjunctions and not disjunctions?

 
Why do variables that appear in the effects of an action schema have to appear in the preconditions?

 
Why does an initial state for planning need to consist of ground atoms (no variables)?


 
[20 points] This question is about a feed-forward neural network with no hidden nodes.
Use the code posted at https://iamtrask.github.io/2015/07/12/basic-python-network/. The code is very simple and clean, and the article describes it in detail.
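For reference, the single-layer network in that article looks roughly like the sketch below; the posted NN1.py may differ in small details such as variable names or the number of training iterations.

import numpy as np

# Sigmoid nonlinearity; with deriv=True it returns the derivative,
# assuming x already holds sigmoid outputs.
def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

# Input dataset (one example per row) and target outputs y.
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
y = np.array([[0, 0, 1, 1]]).T

np.random.seed(1)
syn0 = 2 * np.random.random((3, 1)) - 1   # weights, initialized in [-1, 1)

for _ in range(10000):
    l0 = X
    l1 = nonlin(np.dot(l0, syn0))             # forward pass
    l1_error = y - l1                         # error against the targets
    l1_delta = l1_error * nonlin(l1, deriv=True)
    syn0 += np.dot(l0.T, l1_delta)            # weight update

print("Output After Training:")
print(l1)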

 
[5 points] Download the code NN1.py and run it. The program will print something like:


Output After Training:
[[ 0.03178421]
[ 0.02576499]
[ 0.97906682]
[ 0.97414645]]
which shows that the output value for each input is close to the corresponding target in y.

 
[5 points] To see what happened, print the final weights that the network has learned.
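For example (syn0 is the name of the weight matrix used in the next part):

print("Final weights learned by the network (syn0):")
print(syn0)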

 
[10 points] Given new input data, use the learned weights to classify the new data. Try as new input [1,2,1], [0,4,1], [5,2,1], and [-2,4,1]. To see the output of the network when new data are presented, you need to multiply the data by the weights in the network and pass the result through the nonlin function, as in k0 = nonlin(np.array([1,2,1]).dot(syn0)).
You can classify the new data one by one or all together.
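For instance, a sketch that classifies the four new inputs one by one (it assumes nonlin and syn0 from NN1.py are already defined in the same script):

new_data = [[1, 2, 1], [0, 4, 1], [5, 2, 1], [-2, 4, 1]]
for x in new_data:
    k0 = nonlin(np.array(x).dot(syn0))   # forward pass on one new example
    print(x, "->", k0)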

 
[not required] If you are curious, you could plot the error as it changes over training. This requires keeping track of the error during the training loop.
To plot in Python, do

import matplotlib.pyplot as plt
and use the functions plt.plot and plt.show to do the plotting.
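A minimal sketch, assuming the error values are appended to a list inside the training loop of NN1.py:

import matplotlib.pyplot as plt

errors = []
# inside the training loop, after computing l1_error:
#     errors.append(np.mean(np.abs(l1_error)))

plt.plot(errors)
plt.xlabel("iteration")
plt.ylabel("mean absolute error")
plt.show()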

 
[30 points] Feed-forward neural network with hidden nodes.
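As a possible starting point, the network with one hidden layer from the same article looks roughly like the sketch below (a hidden layer of 4 units; variable names follow the article, and the code actually posted for this part may differ):

import numpy as np

def nonlin(x, deriv=False):
    # Sigmoid, or its derivative when deriv=True (x then holds sigmoid outputs).
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
y = np.array([[0], [1], [1], [0]])

np.random.seed(1)
syn0 = 2 * np.random.random((3, 4)) - 1   # input -> hidden weights
syn1 = 2 * np.random.random((4, 1)) - 1   # hidden -> output weights

for _ in range(60000):
    l0 = X
    l1 = nonlin(np.dot(l0, syn0))          # hidden layer activations
    l2 = nonlin(np.dot(l1, syn1))          # output layer activations

    l2_error = y - l2
    l2_delta = l2_error * nonlin(l2, deriv=True)

    # Backpropagate: how much did each hidden unit contribute to the output error?
    l1_error = l2_delta.dot(syn1.T)
    l1_delta = l1_error * nonlin(l1, deriv=True)

    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)

print("Output After Training:")
print(l2)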
