# Neural Network Fundamentals
Neural networks are computational models inspired by biological neural networks. This page covers the fundamental concepts and architectures.
## Introduction
Neural networks consist of interconnected nodes (neurons) organized in layers. Each connection has a weight that is adjusted during training.
## Basic Architecture
### Layers
- **Input Layer**: Receives input data
- **Hidden Layers**: Process information
- **Output Layer**: Produces final results
### Neurons
Each neuron:
1. Computes a weighted sum of its inputs (usually plus a bias term)
2. Applies an activation function to that sum
3. Passes the result on to neurons in the next layer
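The three steps above can be sketched as a single function. This is a minimal illustration, not taken from any particular library; the sigmoid activation and the example weights are assumptions chosen for demonstration.

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: activation(w . x + b), here with a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum + bias
    return 1.0 / (1.0 + math.exp(-z))                       # sigmoid activation

# Example: weighted sum is 0.5*2.0 + (-1.0)*1.0 + 0.0 = 0, and sigmoid(0) = 0.5
out = neuron([0.5, -1.0], [2.0, 1.0], 0.0)
```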
## Activation Functions
Common activation functions include:
- **Sigmoid**: Smooth S-shaped curve mapping any input to the range (0, 1)
- **ReLU**: Rectified Linear Unit, `max(0, x)`; the most common choice for hidden layers
- **Tanh**: Hyperbolic tangent, mapping inputs to the range (-1, 1)
- **Softmax**: Converts a vector of scores into a probability distribution, used for multi-class classification
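The four functions listed above are short enough to write out directly. A minimal sketch in plain Python (the max-subtraction in softmax is a standard numerical-stability trick, not part of the mathematical definition):

```python
import math

def sigmoid(z):
    """S-shaped curve: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    """Rectified Linear Unit: zero for negative inputs, identity otherwise."""
    return max(0.0, z)

def tanh(z):
    """Hyperbolic tangent: maps any real number into (-1, 1)."""
    return math.tanh(z)

def softmax(zs):
    """Turn a list of scores into probabilities that sum to 1."""
    m = max(zs)                              # subtract max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]
```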
## Training Process
### Forward Propagation
Data flows from input to output through the network: each layer computes a weighted sum of the previous layer's outputs, adds a bias, and applies its activation function to produce the inputs for the next layer.
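That layer-by-layer flow can be sketched as a loop over (weight matrix, bias vector) pairs. The network shape and weights below are arbitrary assumptions for illustration; real implementations use vectorized matrix operations rather than nested Python loops.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, layers):
    """Forward propagation.

    layers: list of (W, b) pairs, where W is a list of weight rows
    (one per neuron) and b is the matching list of biases.
    """
    activations = x
    for W, b in layers:
        activations = [
            sigmoid(sum(w * a for w, a in zip(row, activations)) + bias)
            for row, bias in zip(W, b)
        ]
    return activations

# Assumed toy network: 2 inputs -> 2 hidden neurons -> 1 output
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),  # hidden layer
    ([[1.0, 1.0]], [0.0]),                     # output layer
]
output = forward([1.0, 1.0], layers)
```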
### Backpropagation
The loss is propagated backward through the network using the chain rule, yielding the gradient of the loss with respect to each weight; gradient descent then adjusts each weight a small step in the direction that reduces the loss.
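For a single sigmoid neuron, the whole chain rule fits in a few lines, which makes the mechanics easy to see. This toy training loop (learning rate, epoch count, and the tiny dataset are all assumptions for illustration) applies the backward pass and a gradient-descent update per example:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(data, lr=0.5, epochs=2000):
    """Fit one sigmoid neuron y = sigmoid(w*x + b) with MSE loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in data:
            y = sigmoid(w * x + b)      # forward pass
            dL_dy = 2.0 * (y - target)  # derivative of squared error w.r.t. y
            dy_dz = y * (1.0 - y)       # derivative of sigmoid w.r.t. its input
            grad_z = dL_dy * dy_dz      # chain rule: dL/dz
            w -= lr * grad_z * x        # gradient descent step for the weight
            b -= lr * grad_z            # and for the bias
    return w, b

# Toy task: output near 1 for positive inputs, near 0 for negative inputs
w, b = train_neuron([(1.0, 1.0), (-1.0, 0.0)])
```

Real networks repeat this pattern layer by layer, reusing each layer's gradient to compute the one before it.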
### Loss Functions
- **Mean Squared Error (MSE)**: For regression
- **Cross-Entropy**: For classification
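Both losses above are one-liners. A minimal sketch (the `eps` clamp guards against `log(0)` and is a common implementation detail, not part of the mathematical definition; `y_true` for cross-entropy is assumed to be a one-hot label vector):

```python
import math

def mse(y_true, y_pred):
    """Mean Squared Error: average squared difference, used for regression."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy between one-hot labels and predicted probabilities."""
    return -sum(t * math.log(max(p, eps)) for t, p in zip(y_true, y_pred))
```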
## Applications
Neural networks are used in:
- Image recognition
- Natural language processing
- Speech recognition
- Game playing (e.g., AlphaGo)
## Conclusion
Understanding neural network fundamentals is crucial for working with deep learning and modern AI systems.