
Introduction to TensorFlow — What It Is and How It Works

⚡ Quick Answer
TensorFlow is like a very powerful calculator that can learn from data. Imagine teaching a child to recognise cats by showing them thousands of cat photos. TensorFlow does the same thing for computers — you show it examples, it adjusts its internal settings (called weights) thousands of times until it gets good at recognising the pattern. The 'flow' in TensorFlow refers to how data flows through a graph of mathematical operations.

TensorFlow is Google's open-source machine learning framework and one of the most widely used tools in AI development. But many tutorials throw you into model building without explaining what TensorFlow actually is — and that confusion compounds as complexity grows.

At its core, TensorFlow is a library for numerical computation using data flow graphs. Tensors are multi-dimensional arrays that flow through a graph of mathematical operations. That's it. The machine learning part is built on top of this foundation.

By the end of this article you'll understand tensors, how TensorFlow executes computations, and you'll train your first neural network.

What Is a Tensor?

A tensor is just a multi-dimensional array with a fixed data type. A scalar (single number) is a 0D tensor. A vector (list of numbers) is a 1D tensor. A matrix is a 2D tensor. A batch of colour images (batch_size, height, width, channels) is a 4D tensor. Every piece of data in TensorFlow — inputs, weights, outputs — is a tensor.

tensors_demo.py · PYTHON
import tensorflow as tf

# 0D tensor — a single number
scalar = tf.constant(42)
print("Scalar:", scalar)          # tf.Tensor(42, shape=(), dtype=int32)

# 1D tensor — a vector
vector = tf.constant([1.0, 2.0, 3.0])
print("Vector shape:", vector.shape)  # (3,)

# 2D tensor — a matrix
matrix = tf.constant([[1, 2], [3, 4], [5, 6]])
print("Matrix shape:", matrix.shape)  # (3, 2)

# Operations run immediately — TF2 uses eager execution by default
result = tf.matmul(tf.constant([[1.0, 2.0]]), tf.constant([[3.0], [4.0]]))
print("Matrix multiply result:", result.numpy())  # [[11.0]]
▶ Output
Scalar: tf.Tensor(42, shape=(), dtype=int32)
Vector shape: (3,)
Matrix shape: (3, 2)
Matrix multiply result: [[11.]]
🔥
TF1 vs TF2: TensorFlow 2.x uses eager execution by default — operations run immediately and return values you can inspect. TF1 required building a static graph first, then running it inside a Session. Always use TF2 for new projects.
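When you do want graph performance in TF2, you wrap a Python function in tf.function instead of building a Session. A minimal sketch of the two styles, assuming a standard TF2 install:

```python
import tensorflow as tf

# Eager: runs immediately, line by line, like normal Python
def eager_square_sum(x):
    return tf.reduce_sum(tf.square(x))

# Graph: tf.function traces the Python code into a graph on first call,
# then reuses the compiled graph on subsequent calls
@tf.function
def graph_square_sum(x):
    return tf.reduce_sum(tf.square(x))

x = tf.constant([1.0, 2.0, 3.0])
print(eager_square_sum(x).numpy())  # 14.0
print(graph_square_sum(x).numpy())  # 14.0 — same result, compiled execution
```

Same maths either way; the decorated version just pays a one-time tracing cost in exchange for faster repeated calls.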

Building and Training Your First Model

TensorFlow's high-level Keras API makes building neural networks straightforward. You define layers, compile the model with a loss function and optimizer, then call fit(). TensorFlow handles backpropagation, gradient calculation, and weight updates automatically.

first_model.py · PYTHON
import tensorflow as tf
import numpy as np

# Generate simple training data: y = 2x + 1
x_train = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y_train = np.array([3, 5, 7, 9, 11, 13, 15, 17], dtype=float)  # 2x + 1

# Build model: one dense layer with one neuron = learns y = wx + b
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])
])

# Compile: mean squared error loss, SGD optimizer
model.compile(optimizer='sgd', loss='mean_squared_error')

# Train for 500 epochs
model.fit(x_train, y_train, epochs=500, verbose=0)

# Predict: should output ~2*10 + 1 = 21
prediction = model.predict(np.array([[10.0]]))  # newer Keras requires an array, not a plain list
print(f"Prediction for x=10: {prediction[0][0]:.2f}")
print(f"Learned weight: {model.layers[0].get_weights()[0][0][0]:.4f}")  # Should be ~2.0
print(f"Learned bias:   {model.layers[0].get_weights()[1][0]:.4f}")     # Should be ~1.0
▶ Output
Prediction for x=10: 21.00
Learned weight: 1.9998
Learned bias: 1.0009
⚠️
What Just Happened: The model started with random weights, predicted outputs, calculated how wrong it was (loss), then adjusted the weights to be less wrong. It did this 500 times. That's machine learning — nothing magic, just optimisation.
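To see that there really is no magic, here is the same optimisation written by hand in plain NumPy — an illustrative sketch of the predict/measure/adjust loop, not TensorFlow's actual implementation:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = 2 * x + 1                        # same data as the Keras example

w, b = 0.0, 0.0                      # start from arbitrary weights
lr = 0.02                            # learning rate

for _ in range(500):
    pred = w * x + b                 # predict
    error = pred - y                 # measure how wrong we are
    # gradients of mean squared error with respect to w and b
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w                 # adjust weights to be less wrong
    b -= lr * grad_b

print(f"w ≈ {w:.4f}, b ≈ {b:.4f}")   # converges toward w=2, b=1
```

This is exactly what fit() did for the one-neuron model above, minus the bookkeeping: gradient descent on mean squared error.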
| Feature | TensorFlow / Keras | PyTorch |
| --- | --- | --- |
| Primary use | Production + research | Research-first, growing in production |
| Execution | Eager (TF2) + graph compilation | Eager by default |
| Deployment | TF Serving, TFLite, TF.js | TorchServe, ONNX |
| Learning curve | Gentler with Keras API | More Pythonic, flexible |
| Industry adoption | Dominant in production | Dominant in research papers |

🎯 Key Takeaways

  • A tensor is a multi-dimensional array — the fundamental data structure in TensorFlow
  • TensorFlow 2.x uses eager execution by default — no more Sessions or static graphs
  • Keras is TensorFlow's high-level API — use Sequential for simple models, Functional API for complex ones
  • Training is just iterative optimisation: predict, measure error, adjust weights, repeat

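To make the Sequential vs Functional distinction concrete, here is the same one-layer model written both ways — a minimal sketch, assuming a standard TF2 install:

```python
import tensorflow as tf

# Sequential: a linear stack of layers, one input, one output
seq_model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=[1])
])

# Functional: layers are called on tensors, which allows branches,
# shared layers, and multiple inputs/outputs
inputs = tf.keras.Input(shape=(1,))
outputs = tf.keras.layers.Dense(1)(inputs)
func_model = tf.keras.Model(inputs=inputs, outputs=outputs)

print(seq_model.output_shape)   # (None, 1)
print(func_model.output_shape)  # (None, 1)
```

For a straight pipeline of layers the two are equivalent; reach for the Functional API once your architecture stops being a single chain.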
⚠ Common Mistakes to Avoid

  • Mistake 1: Mixing TF1 and TF2 patterns — tf.Session(), tf.placeholder(), and tf.global_variables_initializer() are TF1 patterns. In TF2 they either don't exist or do nothing. Use tf.function() for graph compilation in TF2.
  • Mistake 2: Forgetting to normalise input data — neural networks train much faster and more stably when inputs are in the range [0, 1] or [-1, 1]. Raw pixel values (0-255) or unnormalised features cause very slow or failed training.
  • Mistake 3: Using .fit() on the entire dataset every epoch instead of batches — for large datasets, always use the batch_size parameter. Loading everything into memory at once will crash on real datasets.
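For mistake 2, normalisation is usually one line. A sketch in NumPy, assuming 8-bit pixel data and a generic numeric feature:

```python
import numpy as np

# Raw 8-bit pixel values in [0, 255]
raw = np.array([[0, 128, 255], [64, 32, 200]], dtype=np.float32)

# Scale to [0, 1] — the most common normalisation for image inputs
scaled = raw / 255.0
print(scaled.min(), scaled.max())  # 0.0 1.0

# Or standardise a feature column to zero mean, unit variance
feature = np.array([10.0, 20.0, 30.0, 40.0])
standardised = (feature - feature.mean()) / feature.std()
print(round(standardised.mean(), 6), standardised.std())  # 0.0 1.0
```

Whichever scheme you pick, compute the statistics on the training set only and apply the same transform to validation and test data.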

Interview Questions on This Topic

  • Q: What is a tensor and how does it differ from a NumPy array?
  • Q: What is the difference between eager execution and graph execution in TensorFlow?
  • Q: What does model.compile() do, and what are the three things you always need to specify?
Naren Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.
