Creating your own Tiny Language Model (TLM) can be both exciting and educational. While Ruby on Rails isn’t traditionally associated with machine learning, it’s perfectly possible to build a simple, functional TLM using Ruby-based tools and Rails for the web interface. In this tutorial, we’ll walk through setting up a small neural language model, training it on a sample dataset, and deploying it using Rails.
What Is a Tiny Language Model?
A Tiny Language Model (TLM) is a stripped-down version of a language model like GPT. While full-scale LLMs are trained on massive datasets with billions of parameters, TLMs are minimal neural networks capable of learning basic word or character-level language patterns. Their lightweight nature makes them ideal for educational purposes and small-scale applications.
Tools and Libraries You’ll Need
We’ll be using the following tools:
- Ruby on Rails – Web framework for building the front-end and API.
- Ruby (>= 3.0) – Language for server-side logic and model code.
- Numo::NArray – Ruby’s equivalent of NumPy, for handling tensor operations.
- GnuplotRB – For plotting loss curves (optional but helpful).
- SQLite/Postgres – To store training data and model parameters.
Install essential gems in your Gemfile:
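For example, using the numo-narray and gnuplotrb gems from RubyGems:

```ruby
# Gemfile
gem "numo-narray"  # tensor operations (Numo::NArray)
gem "gnuplotrb"    # optional: plotting loss curves (GnuplotRB)
```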
Run:
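```bash
bundle install
```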
Setting Up the Rails Project
First, create a new Rails application:
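The application name below is just an example:

```bash
rails new tiny_lm
cd tiny_lm
```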
Generate a scaffold for the training data:
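One possible shape for the data, using a hypothetical TrainingSample model with a single text column:

```bash
rails generate scaffold TrainingSample content:text
rails db:migrate
```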
Add some sample text data via the Rails console or interface. For instance:
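A sketch using the TrainingSample model assumed above:

```ruby
# In the Rails console (rails console)
TrainingSample.create!(content: "hello world. hello ruby. ruby loves rails.")
```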
Tokenization and Preprocessing
Let’s write a tokenizer that maps each character to a unique index and vice versa. Create a new Ruby file at lib/tokenizer.rb:
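One minimal sketch of such a tokenizer (the class and method names are illustrative):

```ruby
# lib/tokenizer.rb
class Tokenizer
  attr_reader :char_to_index, :index_to_char

  def initialize(text)
    chars = text.chars.uniq.sort
    @char_to_index = chars.each_with_index.to_h  # e.g. "a" => 0, "b" => 1, ...
    @index_to_char = @char_to_index.invert       # e.g. 0 => "a", 1 => "b", ...
  end

  def vocab_size
    @char_to_index.size
  end

  def encode(text)
    text.chars.map { |c| @char_to_index.fetch(c) }
  end

  def decode(indices)
    indices.map { |i| @index_to_char.fetch(i) }.join
  end
end
```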
Load and preprocess your corpus:
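Assuming the TrainingSample model and the Tokenizer above, a preprocessing script might look like this (run it with rails runner so ActiveRecord is available):

```ruby
# train.rb
require_relative "lib/tokenizer"

corpus    = TrainingSample.pluck(:content).join(" ")
tokenizer = Tokenizer.new(corpus)
data      = tokenizer.encode(corpus)  # array of character indices
```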
Building a Simple Neural Net in Ruby
Here’s a minimal RNN-like model using Numo::NArray. Create this in lib/tiny_model.rb:
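The sketch below implements a forward-only recurrent cell; the TinyModel name, hidden size, and weight initialization are illustrative choices, not a fixed recipe:

```ruby
# lib/tiny_model.rb
require "numo/narray"

class TinyModel
  def initialize(vocab_size, hidden_size = 32)
    @vocab_size  = vocab_size
    @hidden_size = hidden_size
    # Small random weights: input-to-hidden, hidden-to-hidden, hidden-to-output
    @wxh = (Numo::DFloat.new(hidden_size, vocab_size).rand - 0.5) * 0.1
    @whh = (Numo::DFloat.new(hidden_size, hidden_size).rand - 0.5) * 0.1
    @why = (Numo::DFloat.new(vocab_size, hidden_size).rand - 0.5) * 0.1
  end

  def initial_hidden
    Numo::DFloat.zeros(@hidden_size)
  end

  # One step: given a character index and the previous hidden state,
  # return [probabilities over the next character, new hidden state].
  def forward(index, h)
    x = Numo::DFloat.zeros(@vocab_size)
    x[index] = 1.0                                   # one-hot encode the input
    h = Numo::NMath.tanh(@wxh.dot(x) + @whh.dot(h))  # recurrent update
    logits = @why.dot(h)
    exp = Numo::NMath.exp(logits - logits.max)       # numerically stable softmax
    [exp / exp.sum, h]
  end
end
```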
Training Loop
Now that we have a model and data, let’s train it.
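Continuing from the preprocessing snippet, here’s a sketch of that loop. It computes average cross-entropy over the corpus but performs no weight updates yet:

```ruby
require_relative "lib/tiny_model"

model = TinyModel.new(tokenizer.vocab_size)

10.times do |epoch|
  h    = model.initial_hidden
  loss = 0.0
  (data.length - 1).times do |t|
    probs, h = model.forward(data[t], h)
    loss -= Math.log(probs[data[t + 1]] + 1e-12)  # cross-entropy for the true next char
  end
  puts "epoch #{epoch}: loss #{(loss / (data.length - 1)).round(4)}"
end
```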
This isn’t backpropagation yet—we’re just building the forward pass and calculating loss to show you the structure.
You can expand this with proper gradient calculation using backpropagation through time (BPTT) or implement more advanced optimizers later.
Generating Text
Let’s generate some basic text from the trained model:
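One way to sample from the model, assuming the TinyModel and Tokenizer sketches above (save it as generate.rb):

```ruby
# generate.rb
def generate_text(model, tokenizer, seed_char, length = 100)
  h     = model.initial_hidden
  index = tokenizer.char_to_index.fetch(seed_char)
  out   = [seed_char]

  length.times do
    probs, h = model.forward(index, h)
    # Sample the next character from the predicted distribution
    r, cumulative = rand, 0.0
    index = probs.size.times.find { |i| (cumulative += probs[i]) >= r } || probs.size - 1
    out << tokenizer.index_to_char[index]
  end
  out.join
end

puts generate_text(model, tokenizer, "h")
```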
Rails Interface
To expose your TLM via a Rails controller:
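First, add a route. The path and controller name here are assumptions carried through the rest of this section:

```ruby
# config/routes.rb
Rails.application.routes.draw do
  match "model/generate", to: "model#generate", via: [:get, :post]
end
```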
Then in app/controllers/model_controller.rb:
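A sketch of the controller, with the generate_text helper left as the piece you’ll fill in next:

```ruby
# app/controllers/model_controller.rb
class ModelController < ApplicationController
  def generate
    @generated = generate_text(params.fetch(:seed, "h")) if params[:seed].present?
  end

  private

  # Fill this in with the sampling logic from generate.rb,
  # loading (or memoizing) the model and tokenizer as needed.
  def generate_text(seed_char)
    # ...
  end
end
```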
Define generate_text by copying the logic from generate.rb.
Adding a Front-End
Add a simple form in your Rails view (app/views/model/generate.html.erb) to let users input a starting character and get generated text.
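A minimal version of that view, assuming the model/generate route defined earlier (which gives you the model_generate_path helper):

```erb
<%# app/views/model/generate.html.erb %>
<%= form_with url: model_generate_path do |f| %>
  <%= f.label :seed, "Starting character" %>
  <%= f.text_field :seed, maxlength: 1 %>
  <%= f.submit "Generate" %>
<% end %>

<% if @generated %>
  <pre><%= @generated %></pre>
<% end %>
```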
Conclusion
Building a Tiny Language Model (TLM) from scratch using Ruby on Rails may sound unconventional, but it provides an incredibly rich, hands-on learning experience that can deepen your understanding of how language models work at a fundamental level. Unlike using high-level APIs or relying on pre-trained models with millions of parameters, constructing a model yourself allows you to see the inner mechanics—tokenization, vector representations, matrix operations, and simple neural network design—all of which form the foundation of modern AI.
Through this project, we’ve demonstrated that Ruby, typically known for web development, is entirely capable of supporting machine learning concepts when paired with the right gems like numo-narray. While it lacks the vast ML ecosystem of Python, it makes up for it in elegance, readability, and ease of use—perfect for experimentation and teaching.
We started by creating a corpus and designing a basic character-level tokenizer, which introduced the concept of mapping symbols (characters) to numeric indices and back—essential for feeding text into a neural network. From there, we constructed a minimal neural architecture that mimicked a simple recurrent neural network (RNN), implemented the forward pass logic, and measured loss to get feedback on how well the model was learning the sequence relationships.
Even without full backpropagation, our model could still generate plausible sequences of text based on learned character patterns. This shows that even a naive implementation can capture basic sequential dependencies and generate coherent (if simple) outputs. For beginners or those transitioning into AI, this kind of learning-by-building approach is far more impactful than simply calling APIs.
Integrating the model into a Ruby on Rails application added a valuable dimension to this project. It transformed a standalone script into an interactive tool that can be accessed through a web interface or RESTful API. This is particularly useful for anyone who wants to build ML-backed products or prototypes directly in Ruby without switching stacks. Rails’ MVC architecture makes it easy to manage user input, store model data, and present generated text—all within a structured and scalable framework.
Looking forward, this project can be extended in many exciting ways. For example:
- Backpropagation & Training Loops: Implementing full backpropagation through time (BPTT) would allow the model to actually learn and improve over epochs, minimizing the prediction error systematically.
- Word-Level Modeling: By switching from characters to words, you could scale the model to generate grammatically richer and semantically meaningful sentences.
- Persistent Model State: Saving weights and training states to the database would enable continued training across sessions or deployment to production environments.
- Interactive Front-End: Using Rails’ Hotwire or StimulusJS to create real-time interfaces for generating and editing text would make your TLM much more engaging and responsive.
- Deployable API: The model could be wrapped in a Rails API-only app and served to other applications or devices—perhaps even as a microservice within a larger ecosystem.
From an educational standpoint, this project demonstrates how machine learning concepts can transcend the typical Python ecosystem. It opens the door for Ruby developers who are curious about AI but hesitant to dive into a completely new language or stack. By lowering that barrier, we make ML more accessible and less intimidating.
In summary, building a Tiny Language Model in Ruby on Rails is not just a programming exercise—it’s a multidisciplinary journey. It merges natural language processing, numerical computing, neural network architecture, and full-stack web development into one cohesive experience. Whether you’re a Rubyist wanting to dabble in AI, a student looking to understand the math behind language models, or a maker who just loves building cool things from the ground up—this project offers the perfect launchpad.