There are two versions of the Pumping Lemma: one for context-free languages and one for regular languages. This post is about the latter. The Pumping Lemma describes a property that all regular languages share. While it cannot be used by itself to prove that a given language is regular, it can be used, often in a proof by contradiction, to prove that a language is not regular. In this sense the Pumping Lemma provides a necessary condition for a language to be regular, but not a sufficient one.

In my last post, “Kleene’s Theorem,” I provided some useful background information about strings, regular languages, regular expressions, and finite automata before introducing the eponymous theorem, which has become one of the cornerstones of the theory of computation and, by extension, of natural language processing (NLP). Kleene’s Theorem tells us that regular expressions and finite automata describe exactly the same class of languages: the regular languages. In this post I will provide a proof of this groundbreaking principle.

An **arithmetic sequence** of numbers, sometimes alternatively called an **arithmetic progression**, is a sequence of numbers in which the difference between every pair of consecutive numbers is constant. A very simple arithmetic sequence consists of the natural numbers: 1, 2, 3, 4, …, where the difference between any number and the number before it is just one. 3, 7, 11, 15, 19, … is another arithmetic sequence, but in this case the constant difference between elements is four.

A finite portion of an arithmetic sequence like 2, 3, 4 or 7, 11, 15 is called a **finite arithmetic progression**. To confuse matters, sometimes a finite arithmetic progression, like an arithmetic sequence, is also called an arithmetic progression. To be safe, when a progression is finite, I always say as much.

An **arithmetic series** is the sum of a finite arithmetic progression. An arithmetic series consisting of the first four natural numbers is 1 + 2 + 3 + 4. The sum, 10, is trivial to compute via simple addition, but for a longer series with larger numbers, having a formula to calculate the sum is indispensable.
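The standard closed form sums a finite arithmetic progression of n terms as n times the average of the first and last terms. A quick sketch in Python (the function name and example values are mine):

```python
def arithmetic_series(first, last, n):
    """Sum of a finite arithmetic progression of n terms: n * (first + last) / 2."""
    return n * (first + last) // 2

# The series 1 + 2 + 3 + 4 has n = 4, first = 1, last = 4.
print(arithmetic_series(1, 4, 4))  # → 10
```

The same call with first = 3, last = 19, n = 5 recovers the sum of the 3, 7, 11, 15, 19 progression above.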

This is going to be another one of my “selfish” posts – written primarily for me to refer back to in the future and not because I believe it will benefit anyone other than me. The idea is one that I always took for granted but had a hard time proving to myself once I decided to try.

**Theorem**: Suppose we have an M-bit unsigned binary integer with value A. Consider the first (least significant) N bits, with value B. Then:

B = A mod 2^N

Put another way, arithmetic with unsigned binary integers of a fixed length N is always performed modulo 2^N.
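To see the theorem in action, here is a quick Python check (the bit width and value are arbitrary): keeping the N least significant bits of A, whether by masking or by taking A mod 2^N, yields the same B.

```python
M, N = 16, 8
A = 0b1011_0110_1101_0011      # an arbitrary M-bit unsigned value
B_mask = A & ((1 << N) - 1)    # keep the N least significant bits
B_mod = A % (1 << N)           # A mod 2^N
assert B_mask == B_mod
print(B_mod)  # → 211 (0b1101_0011)
```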


It is no big secret that exponentiation is just multiplication in disguise. It is a shorthand way to write an integer multiplied by itself some number of times, and it saves more space the larger the exponent becomes. In the same vein, a serious problem with calculating numbers raised to exponents is that the results very quickly become extremely large as the exponent increases. The following rule provides a great computational advantage when doing modular exponentiation.

The rule for doing exponentiation in modular arithmetic is:

A^B mod C = (A mod C)^B mod C

This states that if we take an integer A, raise it to an integer power B, and calculate the result modulo C, we will get the same result as if we had taken A modulo C first, raised it to B, and calculated that power modulo C.
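Python's built-in three-argument pow performs exactly this kind of reduced computation, so we can check the rule against the naive approach (the values here are arbitrary):

```python
A, B, C = 7, 128, 13

naive = (A ** B) % C        # computes the huge intermediate value 7**128 first
reduced = pow(A % C, B, C)  # reduces A modulo C, then exponentiates modulo C
assert naive == reduced
print(reduced)  # → 3
```

The reduced form never builds the enormous intermediate power, which is precisely the computational advantage the rule provides.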

I must stay focused. I must stay focused. I must stay … I wonder what’s new on Facebook.

I don’t really feel like writing this post, mostly because I know that it will be very similar to the other two I have already done: the modular addition rule proof and the modular subtraction rule proof, but my New Year’s resolution is to follow things through to completion. Well, that would’ve been my New Year’s resolution if I had made one. Either way, it’s back to modular arithmetic.

The rule for doing multiplication in modular arithmetic is:

(A × B) mod C = ((A mod C) × (B mod C)) mod C

This says that if we multiply integer A by integer B and take the product modulo C, we get the same answer as if we had first taken A modulo C, multiplied it by B modulo C, and taken that product modulo C.
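A quick Python sanity check of the multiplication rule (the values are arbitrary):

```python
A, B, C = 123_456, 789_012, 97

lhs = (A * B) % C              # multiply first, then reduce
rhs = ((A % C) * (B % C)) % C  # reduce each factor first
assert lhs == rhs
```

Reducing each factor first keeps the intermediate product small, which matters when A and B are large.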

I’ve already presented and proved the rule for modular addition, so for a sense of completeness, but mostly to satisfy my OCD, I’ll now cover the rule for modular subtraction. When doing subtraction in modular arithmetic, the rule is:

(A − B) mod C = ((A mod C) − (B mod C)) mod C

If we subtract integer B from integer A and calculate the difference modulo C, we get the same answer as if we had subtracted B modulo C from A modulo C and then calculated that difference modulo C. Like the modular addition rule, this rule can also be extended to include more than two integers.
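Checking the subtraction rule in Python (arbitrary values; note that Python's % operator always returns a nonnegative result for a positive modulus, which keeps both sides in the same canonical range even when the difference is negative):

```python
A, B, C = 17, 40, 7

lhs = (A - B) % C              # (17 - 40) = -23, reduced modulo 7
rhs = ((A % C) - (B % C)) % C  # (3 - 5) = -2, reduced modulo 7
assert lhs == rhs
print(lhs)  # → 5
```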

Addition in modular arithmetic is much simpler than it would first appear thanks to the following rule:

(A + B) mod C = ((A mod C) + (B mod C)) mod C

This says that if we are adding two integers A and B and then calculating their sum modulo C, the answer is the same as if we added A modulo C to B modulo C and then calculated that sum modulo C. Note that this equation can be extended to include more than two terms.
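A one-line check of the addition rule in Python (the values are arbitrary):

```python
A, B, C = 17, 40, 7

lhs = (A + B) % C              # add first, then reduce: 57 mod 7
rhs = ((A % C) + (B % C)) % C  # reduce each term first: (3 + 5) mod 7
assert lhs == rhs
print(lhs)  # → 1
```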

It may very well be the second most famous equation of all time, outshone only by that braggart Einstein’s mass–energy equivalence equation. But for those of us who aren’t theoretical physicists, the Pythagorean Theorem is likely to play a fundamental role in many of the calculations we do, whether we realize it or not.

Everyone knows the equation, but before we get into its proof, let’s recap what it means. Consider the following right triangle:

This triangle has sides a and b and hypotenuse c. The Pythagorean Theorem simply tells us that the square of the length of the hypotenuse (c²) is equal to the square of the length of side a plus the square of the length of side b (c² = a² + b²). Well, duh. After we learn it in middle school, many of us take it as axiomatic, but it does have “theorem” in its name, so I feel it’s worth revisiting its proof every once in a while to keep me on my toes.
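A quick numerical sanity check with the classic 3–4–5 right triangle (any Pythagorean triple would do):

```python
import math

a, b = 3, 4
c = math.hypot(a, b)  # length of the hypotenuse, sqrt(a^2 + b^2)
assert c ** 2 == a ** 2 + b ** 2
print(c)  # → 5.0
```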