Cerebral Cortex, 11(12):1144–9, 2001.

Dendritic Inhibition Enhances Neural Coding Properties

M. W. Spratling and M. H. Johnson
Centre for Brain and Cognitive Development, Birkbeck College, London, UK.

Abstract
The presence of a large number of inhibitory contacts at the soma and axon initial segment of cortical pyramidal cells has inspired a large and influential class of neural network models which use post-integration lateral inhibition as a mechanism for competition between nodes. However, inhibitory synapses also target the dendrites of pyramidal cells. The role of this dendritic inhibition in competition between neurons has not previously been addressed. We demonstrate, using a simple computational model, that such pre-integration lateral inhibition provides networks of neurons with useful representational and computational properties which are not provided by post-integration inhibition.

Introduction
Lateral inhibition between cortical excitatory cells plays an important role in determining the receptive field properties of those cells. Such lateral inhibition provides a mechanism through which cells compete to respond to the current pattern of stimulation. Inhibitory inputs are concentrated on the soma and axon initial segment of pyramidal cells, where they can be equally effective at inhibiting responses to excitatory inputs stimulating any part of the dendritic tree.
This observation has formed the basis for many theories of receptive field formation, and is an essential feature of many computational (neural network) models of cortical function. Such neural network algorithms have also found application beyond the neurosciences as a means of data analysis, classification and visualization in a huge variety of fields. These algorithms vary greatly in the details of their implementation. In some, competition is achieved explicitly by using lateral connections between the nodes of the network, while in others competition is implemented implicitly through a selection process which chooses the ‘winning’ node(s). However, in all of these algorithms nodes compete for the right to generate a response to the current pattern of input activity. A node’s success in this competition is dependent on the total strength of the stimulation it receives, and nodes which compete unsuccessfully have their output activity suppressed. This class of models can thus be described as implementing ‘post-integration inhibition’.
Inhibitory contacts also occur on the dendrites of cortical pyramidal cells, and certain classes of interneuron (e.g., double bouquet cells) specifically target dendritic spines and shafts. Such contacts would have relatively little impact on excitatory inputs more proximal to the cell body or on the action of synapses on other branches of the dendritic tree. Thus these synapses do not appear to contribute to post-integration inhibition. However, such synapses are likely to have strong inhibitory effects on inputs within the same dendritic branch that are more distal to the site of inhibition. Hence, they could potentially selectively inhibit specific groups of excitatory inputs. Related synapses cluster together within the dendritic tree so that local operations are performed by multiple, functionally distinct, dendritic subunits before integration at the soma. Dendritic inhibition could thus act to ‘block’ the output from individual functional compartments. It has long been recognized that a dendrite composed of multiple subunits would provide a significant enhancement to the computational powers of an individual neuron, and that dendritic inhibition could contribute to this enhancement. However, the role of dendritic inhibition in competition between cells, and its subsequent effect on neural coding and receptive field properties, has not previously been investigated.
We introduce a neural network model which demonstrates that competition via dendritic inhibition significantly enhances the computational properties of networks of neurons. As with models of post-integration inhibition, we simplify reality by combining the action of inhibitory interneurons into direct inhibitory connections between nodes. Furthermore, we group all the synapses contributing to a dendritic compartment together as a single input. Dendritic inhibition is then modeled as (linear) inhibition of this input. The algorithm is described fully in the Methods section, but essentially, it operates by causing each node to attempt to ‘block’ its preferred inputs from activating other nodes. It is thus described as ‘pre-integration inhibition’.

Figure 1: A network competing through pre-integration lateral inhibition. Nodes are shown as large circles, excitatory synapses as small open circles and inhibitory synapses as small filled circles.
We illustrate the advantages of this form of competition with the aid of a few simple tasks which have been used previously to demonstrate the pattern recognition abilities required by models of the human perceptual system. Although these tasks appear to be trivial, succeeding in all of them is beyond the abilities of single-layer neural networks using post-integration inhibition. These tasks demonstrate that pre-integration inhibition (in contrast to post-integration inhibition) enables a neural network to respond simultaneously to multiple stimuli, to distinguish overlapping stimuli, and to deal correctly with incomplete and ambiguous stimuli.
Methods

A simple, two-node, neural network in which there is pre-integration inhibition is shown in Figure 1. The essential idea is that each node inhibits other nodes from responding to the same inputs. Hence, if a node is active and it has a strong synaptic weight to a certain input then it should inhibit other nodes from responding to that input. A simple implementation of this idea for a two-node network would be:

y_1 = \sum_{i=1}^{m} w_{i1} (x_i - \alpha w_{i2} y_2)^+

y_2 = \sum_{i=1}^{m} w_{i2} (x_i - \alpha w_{i1} y_1)^+
where y_j is the activation of node j, w_ij is the synaptic weight from input i to node j, x_i is the activation of input i, α is a scale factor controlling the strength of lateral inhibition, and (v)^+ = v if v ≥ 0, (v)^+ = 0 otherwise. These simultaneous equations are solved iteratively, with the value of α gradually increasing at each iteration from an initial value of zero. Hence, initially each node responds independently to the stimulus, but as α increases the node activations are modified by competition. Steady-state activity is reached (at large α) when each individual input contributes to the activation of (at most) a single node.
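As a concrete illustration, the following is a minimal sketch of this two-node scheme in Python/NumPy. The sequential update order within the loop is our assumption (the paper does not specify how the simultaneous equations are scheduled); np.clip(·, 0, None) implements the rectification (v)^+.

```python
import numpy as np

def two_node_response(W, x, alpha_max=10.0, alpha_step=0.25):
    """Iteratively solve the two-node pre-integration inhibition equations.

    W -- (m, 2) array; W[i, j] is the weight from input i to node j.
    x -- (m,) array of input activations.
    """
    y1 = y2 = 0.0
    for alpha in np.arange(0.0, alpha_max + alpha_step, alpha_step):
        # Each node integrates every input after subtracting the other
        # node's rectified claim on it (alpha * weight * activation).
        y1 = W[:, 0] @ np.clip(x - alpha * W[:, 1] * y2, 0.0, None)
        y2 = W[:, 1] @ np.clip(x - alpha * W[:, 0] * y1, 0.0, None)
    return y1, y2
```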
In order to apply pre-integration lateral inhibition to larger networks, a more complex formulation was used which is suitable for networks containing an arbitrary number of nodes (n) and receiving an arbitrary number of inputs (m):

y_j = \sum_{i=1}^{m} w_{ij} \left( x_i - \alpha \max_{k \neq j} \{ w_{ik} y_k \} \right)^+
This formulation was used to produce all the results presented in this paper. Synaptic weights were normalized such that \sum_{i=1}^{m} w_{ij} = 1. The value of α was increased from zero to ten in steps of 0.25. Activation values reached a steady state at lower α (≈ 2) and remained constant from then on. The step size was found to be immaterial to the final steady-state activation values provided it was less than 0.5.
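A sketch of this general formulation, under the same assumptions as above (NumPy, one sweep over the nodes per α step), might look like the following; the function name and the update scheduling are our own, not taken from the paper.

```python
import numpy as np

def pre_integration_inhibition(W, x, alpha_max=10.0, alpha_step=0.25):
    """Steady-state activations of n >= 2 nodes competing over m inputs.

    W -- (m, n) array; W[i, j] is the weight from input i to node j,
         normalized so that each column sums to one.
    x -- (m,) array of input activations.
    """
    m, n = W.shape
    y = np.zeros(n)
    for alpha in np.arange(0.0, alpha_max + alpha_step, alpha_step):
        for j in range(n):
            # The strongest competing claim on each input made by any
            # other node: max over k != j of w_ik * y_k.
            others = [k for k in range(n) if k != j]
            inhibition = (W[:, others] * y[others]).max(axis=1)
            y[j] = W[:, j] @ np.clip(x - alpha * inhibition, 0.0, None)
    return y
```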
For the simulation shown in Figure 5, a bias was added to the activation of one node. This was implemented by adding 0.1 to the activation of that node during competition. Experiments showed that this bias could occur at any time (and for any duration) prior to α reaching a value of 1.5 to generate the same result.
Although results have not been shown here, this method is not restricted to working with binary encodings of input patterns and works equally well with analog encodings.

Results

Overlap

Figure 2: Representing overlapping input patterns. A network consisting of two nodes and three inputs (‘a’, ‘b’, and ‘c’) is wired up so that the first node receives input from ‘a’ and ‘b’ (with a weight of 1/2 from each) and the second node receives input from all three sources (with a weight of 1/3 from each). The response of the network to each possible pattern of inputs is shown. Pre-integration lateral inhibition (lateral weights have been omitted from the figures) enables each node to respond exclusively to its preferred pattern: i.e., either ‘ab’ (110) or ‘abc’ (111). Other input patterns cause a weaker response from the node which has the closest matching preferred input.
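Assuming the pre_integration_inhibition sketch from the Methods section above, the Figure 2 network can be probed directly (weights as given in the caption):

```python
import numpy as np

# Columns: node 1 (prefers 'ab') and node 2 (prefers 'abc');
# rows: inputs 'a', 'b', 'c'.
W = np.array([[1/2, 1/3],
              [1/2, 1/3],
              [0.0, 1/3]])

print(pre_integration_inhibition(W, np.array([1.0, 1.0, 0.0])))  # 'ab':  node 1 alone responds
print(pre_integration_inhibition(W, np.array([1.0, 1.0, 1.0])))  # 'abc': node 2 alone responds
print(pre_integration_inhibition(W, np.array([1.0, 0.0, 0.0])))  # 'a':   node 1 at half strength
```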
In many situations distinct sensory events will share many features in common. If such situations are to be distinguished it is necessary for different sets of neurons to respond despite this overlap in input features. As a simple example, consider the task of representing two overlapping patterns: ‘ab’ and ‘abc’. A network consisting of two nodes receiving input from three sources (labelled ‘a’, ‘b’ and ‘c’) should be sufficient. However, because these input patterns overlap, when the pattern ‘ab’ is presented the node representing ‘abc’ will be partially activated, while when the pattern ‘abc’ is presented the node representing ‘ab’ will be fully activated.
When the synaptic weights have certain values both nodes will respond with equal strength to the same pattern. For example, when the weights are all equal, both nodes will respond to pattern ‘ab’ with equal strength. Similarly, when the total synaptic weight from each input is normalized (‘post-synaptic normalization’) both nodes will respond equally to pattern ‘ab’. When the total synaptic weight to each node is normalized (‘pre-synaptic normalization’) both nodes will respond to pattern ‘abc’ with equal activation (e.g., with weights of (1/2, 1/2, 0) and (1/3, 1/3, 1/3) the responses to ‘abc’ are 1/2 + 1/2 = 1 and 1/3 + 1/3 + 1/3 = 1). Under all these conditions the response fails to distinguish between distinct input patterns, and post-integration inhibition can do nothing to resolve the situation (and will, in general, result in a node chosen at random winning the competition).
Several solutions to this problem have been suggested. Some require adjusting the activations using a function of the total synaptic weight received by the node (i.e., using the Weber law or a masking field (Cohen and Grossberg, 1987)). These solutions scale badly with the number of overlapping inputs, and do not work when (as is common practice in many neural network models) the total synaptic weight to each node is normalized. Other suggestions have involved tailoring the lateral weights to ensure the correct node wins the competition. These methods work well but fail to meet other criteria, as discussed below.
The most obvious, but most overlooked, solution would be to remove the constraints placed on allowable values for synaptic weights (e.g., normalization) which serve to prevent the input patterns from being distinguished in weight space. It is simple to invent sets of weights which unambiguously classify the two overlapping patterns (e.g., if both weights to the node representing ‘ab’ are 0.5 and each weight to the node representing ‘abc’ is 0.4, then the responses to ‘ab’ are 1.0 and 0.8 while the responses to ‘abc’ are 1.0 and 1.2, so each node responds most strongly to its preferred pattern and could then successfully inhibit the activation of the other node).
Using pre-integration lateral inhibition, overlapping patterns can be successfully distinguished even when normalization is used (either pre- or post-synaptic normalization). Figure 2 shows the response of such a network to all possible input patterns. The two networks on the right show that the correct response is generated to input patterns ‘ab’ and ‘abc’. The other networks show that when partial input patterns are presented, the node which represents the most similar pattern is activated in proportion to the degree of overlap between the partial pattern and the preferred input of that node. Hence, when the input is ‘a’ or ‘b’, which partially matches both of the training patterns, the node representing the smallest pattern responds, since these partial patterns are more similar to ‘ab’ than to ‘abc’. When the input is ‘c’ this partially matches only one of the training patterns and hence the node representing ‘abc’ responds. Similarly, patterns ‘bc’ and ‘ac’ most strongly resemble ‘abc’ and hence cause activation of that node.

Figure 3: Representing multiple, overlapping, input patterns. A network consisting of six nodes and six inputs (‘a’, ‘b’, ‘c’, ‘d’, ‘e’, and ‘f’) is wired up so that nodes receive input from patterns ‘a’, ‘ab’, ‘abc’, ‘cd’, ‘de’, and ‘def’. The response of the network to each of these input patterns is shown on the top row. Pre-integration lateral inhibition (lateral weights have been omitted from the figures) enables each node to respond exclusively to its preferred pattern. In addition, the response to multiple and partial patterns is shown on the bottom row. Pattern ‘abcd’ causes the nodes representing ‘ab’ and ‘cd’ to be active simultaneously, despite the fact that this pattern overlaps strongly with pattern ‘abc’. Input ‘abcde’ is parsed as ‘abc’ together with ‘de’, and input ‘abcdef’ is parsed as ‘abc’ + ‘def’. Input ‘abcdf’ is parsed as ‘abc’ + two-thirds of ‘def’; hence the addition of ‘f’ to the pattern ‘abcd’ radically changes the representation that is generated. Input ‘bcde’ is parsed as two-thirds of ‘abc’ plus pattern ‘de’. Input ‘acef’ is parsed as ‘a’ + one half of ‘cd’ + two-thirds of pattern ‘def’.
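The Figure 3 network can be reconstructed in the same way (again assuming the earlier sketch, with each node's weights normalized to sum to one):

```python
import numpy as np

# Six nodes preferring the patterns 'a', 'ab', 'abc', 'cd', 'de', 'def'.
patterns = ['a', 'ab', 'abc', 'cd', 'de', 'def']
inputs = 'abcdef'
W = np.array([[1.0 / len(p) if ch in p else 0.0 for p in patterns]
              for ch in inputs])

x = np.array([1.0 if ch in 'abcd' else 0.0 for ch in inputs])
# Per the paper, 'abcd' is parsed as 'ab' + 'cd', not as 'abc' plus a remainder.
print(pre_integration_inhibition(W, x))
```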
Multiplicity

While it is sufficient in certain circumstances for a single node to represent the input (local coding), it is desirable in many other situations to have multiple nodes providing a factorial or distributed representation. As an extremely simple example, consider three inputs (‘a’, ‘b’ and ‘c’) each of which is represented by one of three nodes. Any pattern of inputs can be represented by having zero, one or multiple nodes active. In this particular case the input to the network provides just as good a representation as the output, so there is little to be gained. However, this example captures the essence of other, more realistic, tasks in which multiple nodes, each of which represents multiple inputs, may need to be active.
Post-integration lateral inhibition can be modified to enable multiple nodes to be active by weakening the strength of the competition between those pairs of nodes that need to be coactive (the lateral weights need to reach a compromise strength which provides sufficient competition for distinct patterns while allowing multiple nodes to respond to multiple patterns). This either requires a priori knowledge of which nodes will be coactive or the ability to learn appropriate lateral weights. However, information locally available at a synapse is insufficient to determine if the correct compromise weights have been reached, and it is thus necessary to add further constraints to derive a learning rule. The proposed constraints require that all input patterns occur with equal probability and that pairs of nodes are coactive with equal frequency. These constraints severely restrict the class of problems that can be successfully represented to those in which all input patterns are mutually exclusive or in which all pairs of input patterns occur simultaneously with equal frequency. As an example of a case for which these networks would fail, consider using a single network to represent the color and shape of an object. At any given time only one node (or group of nodes) representing a single color and one node (or group of nodes) representing a single shape should be active. There thus needs to be strong inhibition between nodes representing properties within the same class, and weak inhibition between nodes representing different properties. This task fails to match the requirements implicitly defined in the learning rules, and application of those rules would lead to weakening of lateral inhibition within each class until multiple color nodes and multiple shape nodes were coactive with equal frequency. Hence, post-integration lateral inhibition, implemented using explicit lateral weights, fails to provide factorial coding except for the exceptional case in which all pairs of patterns co-occur together, or in which external knowledge is available to set appropriate lateral weights.
Networks in which competition is implemented using a selection mechanism can also be modified to allow multiple nodes to be simultaneously active (e.g., k-winners-take-all). However, these networks also restrict the types of task that can be successfully represented to those in which a pre-defined number of nodes need to be active in response to every pattern of stimuli.
In contrast, pre-integration lateral inhibition places no restrictions on the number of active nodes, nor on the frequency with which nodes, or pairs of nodes, are active. Such a network can thus respond appropriately to any combination of input patterns; for example, it can directly solve the problem of representing any arbitrary combination of the inputs ‘a’, ‘b’ and ‘c’. A more challenging problem is shown in Figure 3. Here nodes represent six overlapping patterns. The network responds correctly to each of these patterns and to multiple, overlapping, patterns (even in cases where only partial patterns are presented).

Figure 4: Representing ambiguous input patterns. A network consisting of two nodes and three inputs (‘a’, ‘b’, and ‘c’) is wired up so that the first node receives input from ‘ab’ and the second node receives input from ‘bc’ (all weights have a value of 1/2). The response of the network to each possible pattern of inputs is shown. Pre-integration lateral inhibition (lateral weights have been omitted from the figures) suppresses any response to pattern ‘b’ (010), which overlaps equally with each node’s preferred input pattern. Similarly, when the input is ‘abc’ the ambiguous contribution from input ‘b’ is suppressed, so that both nodes respond at half strength. It can be seen that in other conditions each node responds at half strength when the input matches half its preferred input, and at full strength when its preferred input is presented.

Ambiguity
In some circumstances there simply is no correct parsing of the input pattern. Consider a neural network with two nodes and three inputs (‘a’, ‘b’ and ‘c’). If one node represents the pattern ‘ab’ and the other represents the pattern ‘bc’, then the input ‘b’ is ambiguous, since it equally matches the preferred input of both nodes. In this situation, most implementations of post-synaptic lateral inhibition would allow one node, chosen at random, to be active at half its normal strength. An alternative implementation is to use weaker lateral weights to enable both nodes to respond with one-quarter of the maximum response. However, this approach is also unsatisfactory, since it suggests that one-quarter of each pattern is present, when this is not the case. Neither of these activity patterns seems to provide an appropriate representation. Any response in which both nodes generate equal activity suggests that a single piece of data provides evidence for two interpretations simultaneously, while any response in which one node has higher activity than the other is making an unjustified, arbitrary, selection. Pre-integration lateral inhibition avoids generating responses that are not justified by the available data by preventing any response (Figure 4). It thus produces no representation of the input, rather than a potentially misleading representation.
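Using the earlier sketch, the ambiguous case can be probed as follows (again assuming weights of 1/2, as in Figure 4); the expected outcomes noted in the comments are those reported in the paper:

```python
import numpy as np

# Figure 4 network: node 1 prefers 'ab', node 2 prefers 'bc'.
W = np.array([[1/2, 0.0],   # input 'a'
              [1/2, 1/2],   # input 'b'
              [0.0, 1/2]])  # input 'c'

# Ambiguous input 'b': per the paper, responses from both nodes are suppressed.
print(pre_integration_inhibition(W, np.array([0.0, 1.0, 0.0])))

# Input 'abc': the contested contribution from 'b' is suppressed,
# so both nodes respond at half strength.
print(pre_integration_inhibition(W, np.array([1.0, 1.0, 1.0])))
```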
As an example of a situation in which such an approach would be advantageous, consider again using a network to represent the color and shape of an object. However, in this situation the network is wired up to generate localist representations of conjunctions of color and shape from a distributed input representation of these separate features. For example, consider a network with four nodes representing ‘black-squares’, ‘white-squares’, ‘black-triangles’ and ‘white-triangles’ (with the inputs to this network signaling ‘black’, ‘white’, ‘square’ and ‘triangle’). In this case the ambiguous situation occurs when multiple objects are presented to the network simultaneously: a black-square and a white-triangle would cause an input pattern identical to that caused by a black-triangle and a white-square. Given such a situation it is important to prevent illusory conjunctions from being represented; pre-integration lateral inhibition does so by suppressing all responses (Figure 5). One solution to this ‘binding’ problem would be the action of expectation or attention in disambiguating the situation. If such modulatory effects are modeled by adding a small increase to the activity of one node during competition, then this succeeds in causing a response from those nodes compatible with the biased interpretation, while suppressing activity in the other two nodes (Figure 5). A similar bias applied to a network using post-integration inhibition would cause the biased node to be the most active, but would also suppress the response of the node representing the second object. An alternative solution would be for inputs representing the features of one object to be active simultaneously but out-of-phase with those inputs representing the other object. In this case the network succeeds (as would a network using the standard method of competition) by responding alternately to the non-ambiguous patterns generated by each individual object presented in isolation.

Figure 5: Representing feature conjunctions. A network consisting of four nodes and four inputs (‘black’, ‘white’, ‘square’ and ‘triangle’) is wired up so that the first node receives input from ‘black-square’, the second from ‘white-square’, the third from ‘black-triangle’ and the fourth from ‘white-triangle’ (all weights have a value of 1/2). The first four figures in the top row show the response of the network to valid conjunctions of features from a single object. The last figure in the top row shows the response to an ambiguous input that could either be caused by the presentation of a black-square and a white-triangle, or by a black-triangle and a white-square. The second row shows responses to the same inputs as used in the first row, but with the first node (which represents ‘black-squares’) receiving a small bias input during competition. It can be seen that for input patterns where activation of the first node is not justified by the input, the bias has no effect on the outcome. However, for the ambiguous case the bias causes a parsing of the input into ‘black-square’ + ‘white-triangle’.
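One way to reproduce the biased simulation described in the Methods (an additive 0.1 applied during competition) is the following variant of the earlier sketch; the bias argument, and applying it on every α step, are our assumptions:

```python
import numpy as np

def pre_integration_inhibition_biased(W, x, bias=None,
                                      alpha_max=10.0, alpha_step=0.25):
    """As pre_integration_inhibition, but a constant bias (e.g., 0.1 for
    one node) is added to the activations during competition."""
    m, n = W.shape
    b = np.zeros(n) if bias is None else np.asarray(bias, dtype=float)
    y = np.zeros(n)
    for alpha in np.arange(0.0, alpha_max + alpha_step, alpha_step):
        for j in range(n):
            others = [k for k in range(n) if k != j]
            inhibition = (W[:, others] * y[others]).max(axis=1)
            y[j] = W[:, j] @ np.clip(x - alpha * inhibition, 0.0, None) + b[j]
    return y

# Figure 5 network: columns are 'black-square', 'white-square',
# 'black-triangle', 'white-triangle'; rows are the inputs
# 'black', 'white', 'square', 'triangle' (weights of 1/2).
W = np.array([[1/2, 0.0, 1/2, 0.0],
              [0.0, 1/2, 0.0, 1/2],
              [1/2, 1/2, 0.0, 0.0],
              [0.0, 0.0, 1/2, 1/2]])

x = np.ones(4)  # ambiguous: all four features present
print(pre_integration_inhibition_biased(W, x))  # per the paper: all suppressed
print(pre_integration_inhibition_biased(W, x, bias=[0.1, 0.0, 0.0, 0.0]))
# per the paper: parsed as 'black-square' + 'white-triangle'
```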
Discussion

The above examples have shown that pre-integration lateral inhibition provides useful computational capacities that cannot be generated using post-integration lateral inhibition. A network of neurons competing through pre-integration lateral inhibition is thus capable of generating correct representations based on the ‘knowledge’ stored in the synaptic weights of the neural network. Specifically, it is capable of generating a local encoding of individual input patterns, as well as responding simultaneously to multiple patterns, when they are present, in order to generate a factorial or distributed encoding. It can produce an appropriate representation even when patterns overlap. It is able to respond to partial patterns such that the response is proportional to how well that input matches the stored pattern, and it can detect ambiguities and suppress responses to them.
Our algorithm simplifies reality by assuming that the role of inhibitory cells can be approximated by direct inhibitory weights from excitatory cells, and that these lateral weights have the same strength as the corresponding afferent weights. The latter simplification can be justified since weights that have identical values also have identical pre- and post-synaptic activation values and hence could be learnt independently. Such a learning mechanism would require inhibitory synapses contacting the dendrite to be modified as a function of the local dendritic activity, rather than the output activity of the inhibited cell. More complex models, which include a separate inhibitory cell population, and which use multi-compartmental models of dendritic processes, could relate our proposal more directly to physiology. We hope that our demonstration of the computational and representational advantages that could arise from dendritic inhibition will serve to stimulate such more detailed studies.
Computational considerations have led us to suggest that competition via dendritic inhibition could significantly enhance the information processing capacities of networks of cortical neurons. This claim is anatomically plausible, since it has been shown that cortical pyramidal cells innervate inhibitory cell types which in turn form synapses on the dendrites of pyramidal cells. However, determining the functional role of these connections will require further experimental evidence. Our model predicts that it should be possible to find pairs of cortical pyramidal cells for which action potentials generated by one cell induce inhibitory post-synaptic potentials within the dendrites of the other. Independent of such experimental support, the algorithm we have presented could have immediate advantages for a great number of neural network applications in a huge variety of fields.

Acknowledgements
This work was funded by MRC Research Fellowship number G81/512.

References
Borg-Graham LT, Monier C, Fregnac Y (1998) Visual input evokes transient and strong shunting inhibition in
visual cortical neurons. Nature 393(6683):369–373.
Buhl EH, Tamas G, Szilagyi T, Stricker C, Paulsen O, Somogyi P (1997) Effect, number and location of synapses
made by single pyramidal cells onto aspiny interneurones of cat visual cortex. J Physiol 500(3):689–713.
Cohen MA, Grossberg S (1987) Masking fields: a massively parallel neural architecture for learning, recognizing,
and predicting multiple groupings of patterned data. Appl Optics 26:1866–1891.
Földiák P (1989) Adaptive network for optimal linear feature extraction. In: Proceedings of the IEEE/INNS
International Joint Conference on Neural Networks, volume 1, pp. 401–405. New York: IEEE Press.
Földiák P (1990) Forming sparse representations by local anti-Hebbian learning. Biol Cybern 64:165–170.
Földiák P (1991) Learning invariance from transformation sequences. Neural Comput 3:194–200.
Gray CM (1999) The temporal correlation hypothesis of visual feature integration: still alive and well. Neuron 24(1):31–47.
Grossberg S (1987) Competitive learning: from interactive activation to adaptive resonance. Cognitive Sci 11:23–63.
Häusser M (2001) Synaptic function: dendritic democracy. Curr Biol 11:R10–12.
Häusser M, Spruston N, Stuart GJ (2000) Diversity and dynamics of dendritic signalling. Science 290(5492):739–744.
Hertz J, Krogh A, Palmer RG (1991) Introduction to the Theory of Neural Computation. Redwood City, California: Addison-Wesley.
Kim HG, Beierlein M, Connors BW (1995) Inhibitory control of excitable dendrites in neocortex. J Neurophysiol
Koch C, Poggio T, Torre V (1983) Nonlinear interactions in a dendritic tree: localization, timing, and role in
information processing. Proc Natl Acad Sci USA 80(9):2799–2802.
Koch C, Segev I (2000) The role of single neurons in information processing. Nature Neurosci [Suppl] 3:1171–1177.
Kohonen T (1997) Self-Organizing Maps. Berlin: Springer.
Marshall JA (1995) Adaptive perceptual pattern recognition by self-organizing neural networks: context, uncertainty, multiplicity, and scale. Neural Networks 8(3):335–362.
Marshall JA, Gupta VS (1998) Generalization and exclusive allocation of credit in unsupervised category learning.
Network - Comput Neural Syst 9(2):279–302.
Mel BW (1993) Synaptic integration in an excitable dendritic tree. J Neurophysiol 70(3):1086–1101.
Mel BW (1994) Information processing in dendritic trees. Neural Comput 6:1031–1085.
Mel BW (1999) Why have dendrites? A computational perspective. In: Dendrites (Stuart G, Spruston N, Häusser M, eds.), pp. 271–289. Oxford: Oxford University Press.
Mountcastle VB (1998) Perceptual Neuroscience: The Cerebral Cortex. Cambridge, Massachusetts: Harvard University Press.
Nigrin A (1993) Neural Networks for Pattern Recognition. Cambridge, Massachusetts: MIT Press.
Oja E (1989) Neural networks, principal components, and subspaces. Int J Neural Syst 1:61–68.
O’Reilly RC (1998) Six principles for biologically based computational models of cortical cognition. Trends Cogn Sci 2(11):455–462.
Rall W (1964) Theoretical significance of dendritic trees for neuronal input-output relations. In: Neural Theory
and Modeling (Reiss RF, ed.), pp. 73–97. Stanford, California: Stanford University Press.
Reynolds JH, Desimone R (1999) The role of neural mechanisms of attention in solving the binding problem. Neuron 24(1):19–29.
Ritter H, Martinetz T, Schulten K (1992) Neural Computation and Self-Organizing Maps. An Introduction. Reading, Massachusetts: Addison-Wesley.
Rockland KS (1998) Complex microstructures of sensory cortical connections. Curr Opin Neurobiol 8:545–551.
Roelfsema PR, Lamme VAF, Spekreijse H (2000) The implementation of visual routines. Vision Res 40:1385–1411.
Rumelhart DE, Zipser D (1985) Feature discovery by competitive learning. Cognitive Sci 9:75–112.
Sanger TD (1989) Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks 2(6):459–473.
Segev I (1995) Dendritic processing. In: The Handbook of Brain Theory and Neural Networks (Arbib MA, ed.),
pp. 282–289. Cambridge, Massachusetts: MIT Press.
Segev I, Rall W (1998) Excitable dendrites and spines: earlier theoretical insights elucidate recent direct observa-
tions. Trends Neurosci 21(11):453–460.
Singer W (1999) Neuronal synchrony: a versatile code for the definition of relations? Neuron 24(1):49–65.
Sirosh J, Miikkulainen R (1994) Cooperative self-organization of afferent and lateral connections in cortical maps.
Somogyi P, Martin KAC (1985) Cortical circuitry underlying inhibitory processes in cat area 17. In: Models of
the Visual Cortex (Rose D, Dobson VG, eds.), chpt 54. Chichester: Wiley.
Spratling MW (1999) Artificial Ontogenesis: A Connectionist Model of Development. PhD thesis, Department
of Artificial Intelligence, University of Edinburgh.
Swindale NV (1996) The development of topography in the visual cortex: a review of models. Network - Comput Neural Syst 7(2):161–247.
Tamas G, Buhl EH, Somogyi P (1997) Fast IPSPs elicited via multiple synaptic release sites by different types of
GABAergic neurone in the cat visual cortex. J Physiol 500(3):715–738.
Thorpe SJ (1995) Localized versus distributed representations. In: The Handbook of Brain Theory and Neural
Networks (Arbib MA, ed.), pp. 549–552. Cambridge, Massachusetts: MIT Press.
von der Malsburg C (1973) Self-organisation of orientation sensitive cells in the striate cortex. Kybernetik 14:85–100.
von der Malsburg C (1981) The correlation theory of brain function. Technical Report 81-2, Max-Planck-Institute for Biophysical Chemistry, Göttingen.
Wallis G (1996) Using spatio-temporal correlations to learn invariant object recognition. Neural Networks 9(9):1513–1519.