Yerkes-Dodson Law in agents' training

Šarunas Raudys, Viktoras Justickis

    Research output: Contribution to journal › Article

    6 Citations (Scopus)

    Abstract

    The well-known Yerkes-Dodson Law (YDL) states that stimulation of medium intensity produces the fastest learning. Experimenters have mostly explained the YDL by the sequential action of two different processes. We show that the YDL can be elucidated even with a model as simple as a nonlinear single-layer perceptron trained by gradient descent, where the difference between the desired output values is associated with stimulation strength. The nonlinear shape of the curve "number of iterations as a function of stimulation" is caused by the smoothly bounded nonlinearity of the perceptron's activation function and by the difference in desired outputs.
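
    The setup described above can be made concrete with a short numerical sketch. The following is a minimal illustration, not the authors' implementation: a single-layer perceptron with a sigmoid activation is trained by batch gradient descent on a toy two-class Gaussian problem, the two desired outputs are set to 0.5 - d/2 and 0.5 + d/2 so that their difference d plays the role of stimulation strength, and the script counts the epochs needed to cut the squared-error cost by a fixed factor. The data distribution, learning rate, stopping rule, and the helper name epochs_to_learn are all illustrative assumptions.

    # Minimal sketch (not the authors' code) of the model in the abstract:
    # a nonlinear single-layer perceptron trained by gradient descent, with
    # the difference d between the two desired outputs standing in for
    # stimulation strength. Data, learning rate, and stopping rule are
    # illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(42)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def epochs_to_learn(d, lr=1.0, reduction=0.01, max_epochs=100_000):
        """Train on a toy 2-D two-class problem with desired outputs
        0.5 -/+ d/2 and return the number of batch gradient-descent
        epochs needed to cut the mean squared error to `reduction`
        times its initial value."""
        n = 50
        X = np.vstack([rng.normal(-1.5, 1.0, (n, 2)),   # class 1
                       rng.normal(+1.5, 1.0, (n, 2))])  # class 2
        t = np.concatenate([np.full(n, 0.5 - d / 2),    # desired outputs
                            np.full(n, 0.5 + d / 2)])   # differ by d
        w = rng.normal(0.0, 0.1, 2)                     # small random init
        b = 0.0
        initial = np.mean((sigmoid(X @ w + b) - t) ** 2)
        for epoch in range(1, max_epochs + 1):
            y = sigmoid(X @ w + b)
            err = y - t
            if np.mean(err ** 2) <= reduction * initial:
                return epoch
            # Gradient of the mean squared error through the sigmoid;
            # the factor y * (1 - y) is the smoothly bounded slope of the
            # activation function that the abstract refers to.
            grad = 2.0 * err * y * (1.0 - y) / len(t)
            w -= lr * (X.T @ grad)
            b -= lr * grad.sum()
        return max_epochs  # did not converge within the budget

    # Sweep the "stimulation" d and record the training effort at each level.
    for d in (0.1, 0.3, 0.5, 0.7, 0.9, 0.99):
        print(f"d = {d:4.2f}: {epochs_to_learn(d)} epochs")

    Plotting the recorded epoch counts against d gives one instance of the "number of iterations as a function of stimulation" curve discussed in the paper; its exact shape under this sketch depends on the assumed data, learning rate, and stopping criterion.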

    Original language: English
    Pages (from-to): 54-58
    Number of pages: 5
    Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Volume: 2902
    Publication status: Published - 2003

    Keywords

    • Adaptation
    • Intelligent Agents
    • Stimulation
    • Y-D Law

    ASJC Scopus subject areas

    • Computer Science (all)
    • Biochemistry, Genetics and Molecular Biology (all)
    • Theoretical Computer Science
    • Engineering (all)