Implementing the repetition spacing neural network
Bartosz Dreger, Piotr Wozniak
May 29, 1998
See Neural Network SuperMemo for a brief introduction to the repetition spacing neural network.
The state of memory will be described with only two variables: retrievability (R) and stability (S) (Wozniak, Gorzelanczyk, Murakowski, 1995). The following equation relates R and S:
(1) R=e^(-k*t/S)
where:
R - retrievability (probability of recall)
S - stability
t - time
k - a constant (k=1 is assumed further in the text)
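As a minimal sketch, Eqn (1) can be implemented directly (the function name is ours, not part of NN.DLL):

```python
import math

def retrievability(t: float, S: float, k: float = 1.0) -> float:
    """Eqn (1): probability of recall after t days, for stability S."""
    return math.exp(-k * t / S)

# With S1 = -3/ln(0.9) (see the pretraining section), recall after
# 3 days is exactly 0.9, i.e. a forgetting index of 10%.
S1 = -3 / math.log(0.9)
print(retrievability(3, S1))  # 0.9
```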
Input and output
The following functions are to be determined by the network:
(2) Si+1=fs(R,Si,D,G)
(3) Di+1=fd(R,S,Di,G)
The neural network is supposed to generate stability (S) and item difficulty (D) on the output given R, S, D and G on the input:
(4) (Ri,Si,Di,Gi) => (Di+1,Si+1)
where:
Ri - retrievability at the i-th repetition
Si - stability at the i-th repetition
Di - item difficulty at the i-th repetition
Gi - grade scored at the i-th repetition
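The article does not specify the network topology, so the following is only an illustrative sketch of the mapping in Eqn (4): a feed-forward network with a hypothetical 8-unit hidden layer and random initial weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes; the article does not fix the topology.
W1 = rng.normal(scale=0.1, size=(4, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 2))
b2 = np.zeros(2)

def forward(R, S, D, G):
    """Map the input state (Ri, Si, Di, Gi) to the output (D[i+1], S[i+1])."""
    x = np.array([R, S, D, G])
    h = np.tanh(x @ W1 + b1)   # hidden layer
    D_next, S_next = h @ W2 + b2
    return D_next, S_next
```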
Target difficulty will be defined, as in Algorithm SM-8, as the ratio of the second to the first interval. The neural network plug-in (NN.DLL) will record this value for all individual items and use it in training the network:
(5) Do=I2/I1
where:
Do - target difficulty
I1 - first interval
I2 - second interval
The initial value of difficulty will be set to 3.5, i.e. D1=3.5. This is for similarity with Algorithm SM-8 only. As the initial difficulty is not known, it cannot be used to determine the first interval. After the first grade is scored, error correction is still impossible, because the second optimum interval is not yet known. Once it is known, Do can be used for error correction of D on the output.
To avoid convergence problems in the network, the following formula will be used to determine the correct output on D:
(6) Dopt=0.9*Di+0.1*Do
where:
Dopt - difficulty used as the correct output on D
Di - difficulty generated by the network
Do - target difficulty defined by Eqn (5)
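A sketch of Eqns (5) and (6) (the function names are ours):

```python
def target_difficulty(I1: float, I2: float) -> float:
    """Eqn (5): Do, the ratio of the second to the first interval."""
    return I2 / I1

def difficulty_target(D_net: float, D_o: float) -> float:
    """Eqn (6): move the network's difficulty output only 10% of the
    way toward the measured Do, to avoid convergence problems."""
    return 0.9 * D_net + 0.1 * D_o
```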
Error correction for stability S
The following formula, derived from Eqn (1) for a forgetting index equal to 10% and k=1, makes it easy to convert between stability and the optimum interval: I=-ln(0.9)*S
In the optimum case, the network should generate the requested forgetting index for each repetition. A variable forgetting index can easily be used once the stability S is known (see Eqn (1)). For simplicity, we will use a forgetting index equal to 10% in the further analysis.
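The conversion generalizes to any requested forgetting index: setting R=1-FI in Eqn (1) with k=1 and solving for t gives I=-ln(1-FI)*S, which reduces to I=-ln(0.9)*S for FI=10%. A sketch:

```python
import math

def interval(S: float, FI: float = 0.10) -> float:
    """Optimum interval for stability S and requested forgetting index FI."""
    return -math.log(1.0 - FI) * S

def stability(I: float, FI: float = 0.10) -> float:
    """Inverse conversion: stability from an optimum interval."""
    return I / -math.log(1.0 - FI)
```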
To accelerate the convergence, the network will measure the forgetting index for 25 classes of repetitions. These classes are determined by (1) five difficulty categories: 1-1.5, 1.5-2.5, 2.5-3.5, 3.5-5, and over 5, and (2) five interval categories: 1-5, 5-20, 20-100, 100-500, and over 500 days. We will denote the forgetting index measurements for these categories as FI(Dm,In). Additionally, the overall forgetting index FItot will be measured and used in stability error correction.
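A sketch of the bookkeeping, assuming boundary values fall into the lower category (the article does not say which side the boundaries belong to):

```python
import bisect

# Category boundaries from the text: five difficulty bands and five
# interval bands (days).
D_EDGES = [1.5, 2.5, 3.5, 5.0]       # -> categories m = 0..4
I_EDGES = [5.0, 20.0, 100.0, 500.0]  # -> categories n = 0..4

cases = [[0] * 5 for _ in range(5)]   # repetitions seen per class
lapses = [[0] * 5 for _ in range(5)]  # forgotten repetitions per class

def record(D: float, I: float, recalled: bool) -> None:
    """Tally one repetition in its (Dm, In) class."""
    m = bisect.bisect_left(D_EDGES, D)
    n = bisect.bisect_left(I_EDGES, I)
    cases[m][n] += 1
    if not recalled:
        lapses[m][n] += 1

def FI(m: int, n: int) -> float:
    """Measured forgetting index in class (m, n), as a percentage."""
    return 100.0 * lapses[m][n] / cases[m][n] if cases[m][n] else 0.0
```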
The ultimate goal is to reach the forgetting index of 10% in all categories. The following formula will be used in error correction for stability:
(7) FIopt(m,n)=(10*FItot+Cases(m,n)*FI(m,n))/(10+Cases(m,n))
where:
FIopt(m,n) - forgetting index used in error correction for the m-th difficulty category and the n-th interval category
FItot - overall forgetting index measured across all repetitions
FI(m,n) - forgetting index measured for the category (m,n)
Cases(m,n) - number of repetitions recorded in the category (m,n)
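Reusing the tallies from the previous sketch, Eqn (7) amounts to a weighted average in which FItot counts as 10 extra cases, so sparsely populated classes fall back on the overall measurement:

```python
def FI_opt(m: int, n: int, FI_tot: float) -> float:
    """Eqn (7): blend the overall forgetting index with the class
    measurement; with few cases the result stays close to FI_tot."""
    c = cases[m][n]
    return (10.0 * FI_tot + c * FI(m, n)) / (10.0 + c)
```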
The following table illustrates the assumed relationship between FIopt(m,n), grades and the interval correction applied:
[Table: interval correction applied as a function of FIopt(m,n) and the grade scored]
Border conditions
The following additional constraints will be imposed on the neural network to accelerate the convergence:
In the pretraining stage, the following form of Eqns (2) and (3) will be used:
(8) Di+1:=Di+(0.1-(5-G)*(0.08+(5-G)*0.02))
(9) Si+1:=Si*Di*(0.5+1/i)
With D1=3.5 and S1=-3/ln(0.9).
Eqn (8) has been derived from Algorithm SM-2 (see the E-Factor equation). Eqn (9) has been roughly derived from Matrix OF in Algorithm SM-8.
D1=3.5 corresponds with the same setting in Algorithm SM-8. S1=-3/ln(0.9) corresponds with the first interval of 3 days and forgetting index 10%. The value of 3 days is close to an average across a wide spectrum of students and difficulty of the learning material.
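A sketch of the pretraining updates of Eqns (8) and (9); the 0-5 grade scale and the helper name are our assumptions:

```python
import math

def pretrain_step(D: float, S: float, G: int, i: int) -> tuple[float, float]:
    """One pretraining update (grade G on the 0-5 scale, repetition
    number i starting at 1)."""
    D_next = D + (0.1 - (5 - G) * (0.08 + (5 - G) * 0.02))  # Eqn (8)
    S_next = S * D * (0.5 + 1.0 / i)                        # Eqn (9)
    return D_next, S_next

# Initial conditions from the text: D1 = 3.5, S1 = -3/ln(0.9).
D, S = 3.5, -3 / math.log(0.9)
for i, G in enumerate((4, 5, 3), start=1):  # an arbitrary grade sequence
    D, S = pretrain_step(D, S, G, i)
```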
Pretraining will also use the border conditions mentioned in the previous paragraph.