Posts by Tags

Attention

Program Memory: Method (part 2)

4 minute read

Published:

When human programmers code, they often build their programs from core libraries. Most of the time, the program memory stores these static libraries and lets bigger programs be composed dynamically during computation. The libraries are the unitary components from which larger programs are constructed. Maintaining small, functionally independent sub-programs such as libraries encourages program reuse, since a large program must refer to several libraries to complete its task. It also eliminates redundancy, as the stored programs (the core libraries) do not overlap with each other. Read more

Program Memory: Method (part 1)

6 minute read

Published:

A neural network uses its weights to process inputs and return outputs as computation results. Hence, the weights can be viewed as the neural network’s program. If we maintain a program memory of different weights, each responsible for a different computational function, we have a neural Universal Turing Machine. Obvious scenarios where a Program Memory may help: Read more
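To make the idea concrete, here is a minimal sketch (in PyTorch, with hypothetical names and sizes) of a layer that keeps a small memory of candidate weight matrices and softly selects among them per input. It only illustrates the "weights as programs" view, not the exact method described in the post.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgramMemoryLayer(nn.Module):
    """Toy sketch: a memory of K candidate weight matrices ("programs").
    A controller scores the programs for each input, and a soft mixture of
    the stored weights is used to transform that input. Names and sizes are
    illustrative assumptions, not the method from the post."""
    def __init__(self, in_dim, out_dim, num_programs=4):
        super().__init__()
        # K stored programs, each a weight matrix of shape (out_dim, in_dim)
        self.programs = nn.Parameter(torch.randn(num_programs, out_dim, in_dim) * 0.02)
        # Controller that scores programs from the input
        self.controller = nn.Linear(in_dim, num_programs)

    def forward(self, x):                                        # x: (batch, in_dim)
        attn = F.softmax(self.controller(x), dim=-1)             # (batch, K)
        # Compose an input-specific weight as a convex mix of stored programs
        w = torch.einsum('bk,koi->boi', attn, self.programs)     # (batch, out_dim, in_dim)
        return torch.einsum('boi,bi->bo', w, x)                  # (batch, out_dim)

layer = ProgramMemoryLayer(in_dim=8, out_dim=3)
y = layer(torch.randn(5, 8))  # y has shape (5, 3)
```

Each stored matrix plays the role of a small, reusable "program", and the controller decides which mixture of programs to run on a given input.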

Causality

Cognitive architecture

Neural Memory Architecture

2 minute read

Published:

Memory is the core of intelligence. Thanks to memory, humans can effortlessly recognize objects, recall past events, plan for the future, explain their surroundings and reason from facts. From a cognitive perspective, memory can take many forms and serve many functions (see figure below). Read more

Conditional computing

Program Memory: Method (part 2)

4 minute read

Published:

When human programmers code, they often build their programs from core libraries. Most of the time, the program memory stores these static libraries and lets bigger programs be composed dynamically during computation. The libraries are the unitary components from which larger programs are constructed. Maintaining small, functionally independent sub-programs such as libraries encourages program reuse, since a large program must refer to several libraries to complete its task. It also eliminates redundancy, as the stored programs (the core libraries) do not overlap with each other. Read more

Program Memory: Method (part 1)

6 minute read

Published:

A neural network uses its weights to process inputs and return outputs as computation results. Hence, the weights can be viewed as the neural network’s program. If we maintain a program memory of different weights, each responsible for a different computational function, we have a neural Universal Turing Machine. Obvious scenarios where a Program Memory may help: Read more

Program Memory: Introduction

4 minute read

Published:

Memory-augmented neural networks (MANNs) store data in their external memory, resembling Turing Machines. Despite being theoretically Turing-complete, MANNs cannot be trained flexibly to solve any task because they lack a program memory. Without storing programs, it is hard to perform complicated tasks such as simulating recursive function calls or implementing divide-and-conquer algorithms. As long as programs are not treated as data, the computing capability of neural networks remains limited. Read more

Conformal Prediction

DPO

Duality

Multi-memory Architecture

1 minute read

Published:

Imagine this: you only have short-term memory. You can only remember what happens during the day, and when you wake up, your mind is wiped clean. Without long-term memory, you cannot remember even your birthday, last month’s payment or where you were last week. To survive, you must write everything down and re-learn these facts every morning. This is the inconvenient reality for dementia patients who suffer from such conditions. In the same vein, if your mind only relies on System 2 (slow and deliberate), you will fail to think quickly and intuitively, and will process every life event effortfully. However simple System 1 may be, it is necessary and stands apart from System 2. It seems that a good model of memory should treat memory as a set of modules that represent different functions and collaborate to deliver the desired outcome. Inspired by these observations, I wrote a couple of papers on multi-memory systems that analyze various aspects of memory function, such as item-relational storage, view/channel fusion, and information encoding-decoding. Read more

Episodic Memory

Lean Reinforcement Learning

1 minute read

Published:

Despite huge successes in breaking human records, training RL agents today is prohibitively expensive in terms of time, GPUs, and samples. For example, it takes hundreds of millions or even billions of environment steps to reach human-level performance on Atari games, a common benchmark in modern RL. That is only feasible in simulation, not in real-world problems such as robotics or industrial planning. The problem of sample inefficiency is exacerbated in real environments, which can be stochastic, partially observable, noisy or long-horizon. Another issue is model complexity: RL algorithms are getting more complicated, with numerous hyperparameters that need careful tuning. That further increases the cost of training RL agents. Read more

Exploration

Finetuning

Generalization

Generative model

HITL

Hallucination

LLM

LSTM

Large Language Model

Life-long learning

Program Memory: Introduction

4 minute read

Published:

Memory-augmented neural networks (MANNs) store data in their external memory, resembling Turing Machines. Despite being theoretically Turing-complete, MANNs cannot be trained flexibly to solve any task because they lack a program memory. Without storing programs, it is hard to perform complicated tasks such as simulating recursive function calls or implementing divide-and-conquer algorithms. As long as programs are not treated as data, the computing capability of neural networks remains limited. Read more

MANN

Memorization

Memory

Memory in Reinforcement Learning: Overview

3 minute read

Published:

Memory is just storage. Whenever a computation needs to store interim results, it must ask for memory. This fundamental principle applies to any scenario where memory is required, yet a closer look at memory’s role in each domain reveals a different understanding of its functionality and benefits. Read more

Neural Memory Architecture

2 minute read

Published:

Memory is the core of intelligence. Thanks to memory, humans can effortlessly recognize objects, recall past events, plan for the future, explain their surroundings and reason from facts. From a cognitive perspective, memory can take many forms and serve many functions (see figure below). Read more

Multi-modality

Multi-memory Architecture

1 minute read

Published:

Imagine this: you only have short-term memory. You can only remember what happens during the day, and when you wake up, your mind is wiped clean. Without long-term memory, you cannot remember even your birthday, last month’s payment or where you were last week. To survive, you must write everything down and re-learn these facts every morning. This is the inconvenient reality for dementia patients who suffer from such conditions. In the same vein, if your mind only relies on System 2 (slow and deliberate), you will fail to think quickly and intuitively, and will process every life event effortfully. However simple System 1 may be, it is necessary and stands apart from System 2. It seems that a good model of memory should treat memory as a set of modules that represent different functions and collaborate to deliver the desired outcome. Inspired by these observations, I wrote a couple of papers on multi-memory systems that analyze various aspects of memory function, such as item-relational storage, view/channel fusion, and information encoding-decoding. Read more

Multi-view

Multi-memory Architecture

1 minute read

Published:

Imagine this: you only have short-term memory. You can only remember what happens during the day, and when you wake up, your mind is wiped clean. Without long-term memory, you cannot remember even your birthday, last month’s payment or where you were last week. To survive, you must write everything down and re-learn these facts every morning. This is the inconvenient reality for dementia patients who suffer from such conditions. In the same vein, if your mind only relies on System 2 (slow and deliberate), you will fail to think quickly and intuitively, and will process every life event effortfully. However simple System 1 may be, it is necessary and stands apart from System 2. It seems that a good model of memory should treat memory as a set of modules that represent different functions and collaborate to deliver the desired outcome. Inspired by these observations, I wrote a couple of papers on multi-memory systems that analyze various aspects of memory function, such as item-relational storage, view/channel fusion, and information encoding-decoding. Read more

Online learning

Optimal

Pointer

Preference Learning

RL

Lean Reinforcement Learning

1 minute read

Published:

Despite huge successes in breaking human records, training RL agents today is prohibitively expensive in terms of time, GPUs, and samples. For example, it takes hundreds of millions or even billions of environment steps to reach human-level performance on Atari games, a common benchmark in modern RL. That is only feasible in simulation, not in real-world problems such as robotics or industrial planning. The problem of sample inefficiency is exacerbated in real environments, which can be stochastic, partially observable, noisy or long-horizon. Another issue is model complexity: RL algorithms are getting more complicated, with numerous hyperparameters that need careful tuning. That further increases the cost of training RL agents. Read more

RLHF

Reasoning

Reinforcement Learning

Memory in Reinforcement Learning: Overview

3 minute read

Published:

Memory is just storage. Whenever a computation needs to store interim results, it must ask for memory. This fundamental principle applies to any scenario where memory is required, yet a closer look at memory’s role in each domain reveals a different understanding of its functionality and benefits. Read more

Relational

Sample-efficiency

Lean Reinforcement Learning

1 minute read

Published:

Despite huge successes in breaking human records, training RL agents today is prohibitively expensive in terms of time, GPUs, and samples. For example, it takes hundreds of millions or even billions of environment steps to reach human-level performance on Atari games, a common benchmark in modern RL. That is only feasible in simulation, not in real-world problems such as robotics or industrial planning. The problem of sample inefficiency is exacerbated in real environments, which can be stochastic, partially observable, noisy or long-horizon. Another issue is model complexity: RL algorithms are getting more complicated, with numerous hyperparameters that need careful tuning. That further increases the cost of training RL agents. Read more

Stored-program

Program Memory: Method (part 2)

4 minute read

Published:

When human programmers code, they often build their programs from core libraries. Most of the time, the program memory stores these static libraries and lets bigger programs be composed dynamically during computation. The libraries are the unitary components from which larger programs are constructed. Maintaining small, functionally independent sub-programs such as libraries encourages program reuse, since a large program must refer to several libraries to complete its task. It also eliminates redundancy, as the stored programs (the core libraries) do not overlap with each other. Read more

Program Memory: Method (part 1)

6 minute read

Published:

A neural network uses its weights to process inputs and return outputs as computation results. Hence, the weights can be viewed as the neural network’s program. If we maintain a program memory of different weights, each responsible for a different computational function, we have a neural Universal Turing Machine. Obvious scenarios where a Program Memory may help: Read more

Program Memory: Introduction

4 minute read

Published:

Memory-augmented neural networks (MANNs) store data in their external memory, resembling Turing Machines. Despite being theoretically Turing-complete, MANNs cannot be trained flexibly to solve any task because they lack a program memory. Without storing programs, it is hard to perform complicated tasks such as simulating recursive function calls or implementing divide-and-conquer algorithms. As long as programs are not treated as data, the computing capability of neural networks remains limited. Read more

Time-series

Turing machine

Neural Memory Architecture

2 minute read

Published:

Memory is the core of intelligence. Thanks to memory, humans can effortlessly recognize objects, recall past events, plan for the future, explain their surroundings and reason from facts. From a cognitive perspective, memory can take many forms and serve many functions (see figure below). Read more

Uncertainty

Universal Turing Machine

Program Memory: Method (part 2)

4 minute read

Published:

When human programmers code, they often build their programs from core libraries. Most of the time, the program memory stores these static libraries and lets bigger programs be composed dynamically during computation. The libraries are the unitary components from which larger programs are constructed. Maintaining small, functionally independent sub-programs such as libraries encourages program reuse, since a large program must refer to several libraries to complete its task. It also eliminates redundancy, as the stored programs (the core libraries) do not overlap with each other. Read more

Program Memory: Method (part 1)

6 minute read

Published:

A neural network uses its weights to process inputs and return outputs as computation results. Hence, the weights can be viewed as the neural network’s program. If we maintain a program memory of different weights, each responsible for a different computational function, we have a neural Universal Turing Machine. Obvious scenarios where a Program Memory may help: Read more

Program Memory: Introduction

4 minute read

Published:

Memory-augmented neural networks (MANNs) store data in their external memory, resembling Turing Machines. Despite being theoretically Turing-complete, MANNs cannot be trained flexibly to solve any task because they lack a program memory. Without storing programs, it is hard to perform complicated tasks such as simulating recursive function calls or implementing divide-and-conquer algorithms. As long as programs are not treated as data, the computing capability of neural networks remains limited. Read more

VAE