We're excited to announce Neural DSL 0.2.0 - a major update focused on error prevention and developer experience for deep learning workflows. This release introduces granular validation, smarter debugging tools, and significant quality-of-life improvements for neural network development.
## 🚀 What's New in 0.2.0
### 1. Semantic Error Validation Engine

Catch configuration errors before runtime with our new validation system:

```
# Now throws ERROR: "Dropout rate must be ≤ 1.0"
Dropout(1.5)

# ERROR: "Conv2D filters must be positive"
Conv2D(filters=-32, kernel_size=(3,3))

# WARNING: "Dense(128.0) → units coerced to integer"
Dense(128.0, activation="relu")
```
Key validation rules:

- Layer parameter ranges (0 ≤ dropout ≤ 1)
- Positive integer checks (filters, units, etc.)
- Framework-specific constraints
- Custom error severity levels (ERROR/WARNING/INFO)
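Conceptually, a rule engine like this pairs each check with a severity level, raising on errors and collecting warnings. The sketch below is illustrative only; the class and function names are assumptions, not Neural DSL's actual internals:

```python
from enum import Enum


class Severity(Enum):
    ERROR = "ERROR"
    WARNING = "WARNING"
    INFO = "INFO"


class ValidationIssue(Exception):
    """A validation failure carrying its severity and message."""

    def __init__(self, severity, message):
        super().__init__(f"{severity.value}: {message}")
        self.severity = severity
        self.message = message


def validate_dropout(rate):
    """Enforce the 0 <= dropout <= 1 range rule."""
    if not 0 <= rate <= 1:
        raise ValidationIssue(Severity.ERROR, "Dropout rate must be <= 1.0")
    return rate


def validate_units(units):
    """Coerce whole-number floats to int with a warning; reject non-positives."""
    issues = []
    if isinstance(units, float):
        issues.append((Severity.WARNING, f"Dense({units}) -> units coerced to integer"))
        units = int(units)
    if units <= 0:
        raise ValidationIssue(Severity.ERROR, "units must be positive")
    return units, issues
```

The key design point is that warnings are collected and reported without halting compilation, while errors stop it immediately.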
### 2. Enhanced CLI Experience

```
# New dry-run mode
neural compile model.neural --dry-run

# Step debugging
neural debug model.neural --step

# Launch GUI dashboard
neural no-code --port 8051
```

CLI improvements:

- Structured logging with `--verbose`
- Progress bars for long-running operations
- Cached visualizations (30% faster on repeated runs)
- Unified error handling across commands
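Visualization caching of this kind is commonly built by hashing the model source and reusing the previous render whenever nothing changed. A minimal sketch of the idea (assumed mechanism, not Neural DSL's actual implementation):

```python
import hashlib

# Maps a content hash of the model source to its rendered visualization.
_CACHE = {}


def cache_key(source_text):
    """Key derived from the model source, so any edit invalidates the cache."""
    return hashlib.sha256(source_text.encode()).hexdigest()


def render_visualization(source_text, render_fn):
    """Reuse a previous render when the model source is unchanged."""
    key = cache_key(source_text)
    if key not in _CACHE:
        _CACHE[key] = render_fn(source_text)
    return _CACHE[key]
```

Because the key is derived from content rather than a timestamp, the cache survives `touch`-style file updates but invalidates on any real edit.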
### 3. Debugging Superpowers with NeuralDbg

New debugging features:

```
# Gradient flow analysis
neural debug model.neural --gradients

# Find inactive neurons
neural debug model.neural --dead-neurons

# Interactive step debugging
neural debug model.neural --step
```
Debugging capabilities:

- Real-time memory/FLOP profiling
- Layer-wise execution tracing
- NaN/overflow detection
- Interactive tensor inspection
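To make the `--dead-neurons` and NaN-detection ideas concrete, here is a toy sketch on plain Python lists (the function names and thresholds are illustrative assumptions, not NeuralDbg's API):

```python
import math


def find_dead_neurons(activations, threshold=1e-6):
    """Indices of units whose activation is ~zero across every sample.

    activations: list of per-sample activation lists (e.g. post-ReLU outputs),
    all of the same length.
    """
    n_units = len(activations[0])
    dead = []
    for j in range(n_units):
        if all(abs(sample[j]) < threshold for sample in activations):
            dead.append(j)
    return dead


def has_numeric_issues(values):
    """Flag NaNs or infinities anywhere in a flat list of values."""
    return any(math.isnan(v) or math.isinf(v) for v in values)
```

A real debugger would hook these checks into the forward pass and accumulate statistics across batches, but the detection logic is essentially this.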
## 🛠 Migration Guide

### Breaking Changes

- `TransformerEncoder` now requires explicit parameters:

  ```
  # Before (v0.1.x)
  TransformerEncoder()

  # Now (v0.2.0)
  TransformerEncoder(num_heads=8, ff_dim=512)  # default values shown
  ```

- Stricter validation: several checks that previously emitted warnings now raise errors by default.
## 🚀 Getting Started

```
pip install neural-dsl==0.2.0
```
Quick example (MNIST classifier):

```
# mnist.neural
network MNISTClassifier {
  input: (28, 28, 1)
  layers:
    Conv2D(32, (3,3), activation="relu")
    MaxPooling2D(pool_size=(2,2))
    Flatten()
    Dense(128, activation="relu")
    Dropout(0.5)
    Output(10, activation="softmax")

  train {
    epochs: 15
    batch_size: 64
    validation_split: 0.2
  }
}
```

Compile to framework code:

```
neural compile mnist.neural --backend pytorch
```
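One thing a compiler like this must do is propagate tensor shapes through the layer stack. The hand-rolled sketch below traces the MNIST example's shapes (valid padding, stride 1 assumed; this is not Neural DSL's actual shape-inference code):

```python
def conv2d_shape(shape, filters, kernel, stride=1, padding=0):
    """Output (h, w, c) of a 2D convolution over an (h, w, c) input."""
    h, w, _ = shape
    kh, kw = kernel
    return ((h + 2 * padding - kh) // stride + 1,
            (w + 2 * padding - kw) // stride + 1,
            filters)


def maxpool2d_shape(shape, pool):
    """Output shape of non-overlapping max pooling."""
    h, w, c = shape
    return (h // pool[0], w // pool[1], c)


def flatten_shape(shape):
    """Collapse all dimensions into one."""
    n = 1
    for d in shape:
        n *= d
    return (n,)
```

Tracing the example: `(28, 28, 1)` → Conv2D → `(26, 26, 32)` → MaxPooling2D → `(13, 13, 32)` → Flatten → `(5408,)` → Dense → `(128,)` → Output → `(10,)`.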
## 📊 Benchmarks

| Operation | v0.1.1 | v0.2.0 | Improvement |
|---|---|---|---|
| Validation time | 142 ms | 89 ms | 1.6x faster |
| Error message quality | 6.8/10 | 9.1/10 | 34% clearer |
| Debug setup time | 8 min | 2 min | 4x faster |
## 🛠 Under the Hood

Key technical improvements:

- Lark parser upgrades with position tracking
- Type coercion system with warnings
- Unified error handling architecture
- CI/CD pipeline hardening (100% test coverage)
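Position tracking in the parser is what makes precise, caret-style error messages possible. A minimal illustration of the formatting side (not Neural DSL's actual error formatter; line/column numbers are 1-indexed here by assumption):

```python
def format_error(source, line, column, message):
    """Render an error message that points a caret at the offending token."""
    src_line = source.splitlines()[line - 1]
    pointer = " " * (column - 1) + "^"
    return (f"line {line}, column {column}: {message}\n"
            f"  {src_line}\n"
            f"  {pointer}")
```

Given the parser-reported position of `Dropout(1.5)`'s rate argument, this yields a message that shows the offending source line with a caret under the bad value.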
## 🤝 Community & Resources
Try Neural DSL 0.2.0 today and let us know what you build! 🚀