What is the main concept of Rhind papyrus in the Egyptian mathematical system?

The Rhind Mathematical Papyrus is the best example of Egyptian mathematics. Dating back to 1650 BC, it was copied by an Egyptian scribe named Ahmes from another document written around 2000 BC. It is named after Alexander Rhind, a Scottish antiquarian, who purchased the papyrus in 1858 in Luxor, Egypt. The papyrus is 33 cm tall and 5 m long and contains 87 mathematical problems as well as the earliest reference to Pi.

The Pharaoh’s surveyors used measurements based on body parts (a palm was the width of the hand, a cubit the measurement from elbow to fingertips) to measure land and buildings very early in Egyptian history, and a decimal numeric system was developed based on our ten fingers. The oldest mathematical text from ancient Egypt discovered so far, though, is the Moscow Papyrus, which dates from the Egyptian Middle Kingdom around 2000 – 1800 BCE.

It is thought that the Egyptians introduced the earliest fully developed base 10 numeration system at least as early as 2700 BCE (and probably much earlier). Written numbers used a stroke for units, a heel-bone symbol for tens, a coil of rope for hundreds and a lotus plant for thousands, as well as other hieroglyphic symbols for higher powers of ten up to a million. However, there was no concept of place value, so larger numbers were rather unwieldy: although a million required just one character, a million minus one required fifty-four characters.

The Rhind Papyrus, dating from around 1650 BCE, is a kind of instruction manual in arithmetic and geometry, and it gives us explicit demonstrations of how multiplication and division were carried out at that time. It also contains evidence of other mathematical knowledge, including unit fractions, composite and prime numbers, arithmetic, geometric and harmonic means, and how to solve first-order linear equations as well as arithmetic and geometric series. The Berlin Papyrus, which dates from around 1300 BCE, shows that ancient Egyptians could solve second-order algebraic (quadratic) equations.
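The multiplication demonstrated in the papyrus works by repeated doubling: double one factor, and add up the doublings that correspond to the other factor. A short Python sketch of the idea:

```python
def egyptian_multiply(a, b):
    """Multiply a * b the Rhind Papyrus way: repeatedly double b,
    keeping the doublings that make up a (its binary expansion)."""
    total = 0
    while a > 0:
        if a % 2 == 1:   # this doubling is part of a
            total += b
        a //= 2
        b += b           # double b
    return total

print(egyptian_multiply(13, 24))  # 312
```

To compute 13 × 24, the scribe would list 1→24, 2→48, 4→96, 8→192, then add the rows for 8 + 4 + 1 = 13, giving 192 + 96 + 24 = 312.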

Practical problems of trade and the market led to the development of a notation for fractions. The papyri which have come down to us demonstrate the use of unit fractions based on the symbol of the Eye of Horus, where each part of the eye represented a different fraction, each half of the previous one (i.e. half, quarter, eighth, sixteenth, thirty-second, sixty-fourth), so that the total was one-sixty-fourth short of a whole, the first known example of a geometric series. Unit fractions could also be used for simple division sums.
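The Eye of Horus series can be summed exactly with Python's `fractions` module, confirming that the six parts fall one sixty-fourth short of a whole:

```python
from fractions import Fraction

# Eye of Horus fractions: each part is half the previous one.
horus_parts = [Fraction(1, 2**k) for k in range(1, 7)]  # 1/2 ... 1/64
total = sum(horus_parts)

print(total)       # 63/64
print(1 - total)   # 1/64 short of a whole
```

This is the geometric series 1/2 + 1/4 + ... + 1/64 = 63/64 mentioned in the text.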

The Egyptians approximated the area of a circle by using shapes whose areas they did know. They observed that the area of a circle of diameter 9 units, for example, was very close to the area of a square with sides of 8 units, so that the area of circles of other diameters could be obtained by multiplying the diameter by 8/9 and then squaring the result. This gives an effective approximation of π accurate to within less than one percent.
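A few lines of Python verify the rule and the claimed accuracy (the function name is mine):

```python
import math

def egyptian_circle_area(d):
    """Rhind Papyrus rule: a circle of diameter d has roughly the
    area of a square with side (8/9) * d."""
    return (8 * d / 9) ** 2

d = 9
approx = egyptian_circle_area(d)      # 64, the 8-by-8 square
exact = math.pi * (d / 2) ** 2        # about 63.62
error = abs(approx - exact) / exact
print(f"implied pi = {4 * approx / d**2:.4f}, error = {error:.2%}")
```

The rule implies π ≈ 256/81 ≈ 3.1605, an overestimate of about 0.6 percent, comfortably within the "less than one percent" stated above.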

The pyramids themselves are another indication of the sophistication of Egyptian mathematics. Setting aside claims that the pyramids are the first known structures to observe the golden ratio of 1 : 1.618 (which may have occurred for purely aesthetic, and not mathematical, reasons), there is certainly evidence that the Egyptians knew the formula for the volume of a pyramid – 1/3 times the height times the length times the width – as well as the formula for the volume of a truncated or clipped pyramid.
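The truncated-pyramid (frustum) rule appears in Problem 14 of the Moscow Papyrus, which computes the volume for a height of 6 and base and top sides of 4 and 2, getting 56. A small Python sketch of the formula:

```python
def frustum_volume(h, a, b):
    """Volume of a truncated square pyramid with height h, base side a,
    top side b: V = (h/3) * (a^2 + a*b + b^2)."""
    return h * (a * a + a * b + b * b) / 3

print(frustum_volume(6, 4, 2))  # 56.0, the Moscow Papyrus's own answer
```

Setting the top side `b` to zero recovers the full-pyramid formula, (1/3) × height × base area.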

They were also aware, long before Pythagoras, of the rule that a triangle with sides 3, 4 and 5 units yields a perfect right angle, and Egyptian builders used ropes knotted at intervals of 3, 4 and 5 units in order to ensure exact right angles for their stonework (in fact, the 3-4-5 right triangle is often called “Egyptian”).
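The rope trick works because of the converse of the Pythagorean theorem: if the squares on the two shorter sides add up to the square on the longest, the angle between the shorter sides is exactly 90 degrees. A tiny check in Python (the function name is mine):

```python
def encloses_right_angle(a, b, c):
    """Converse of Pythagoras: a triangle whose sides satisfy
    a^2 + b^2 = c^2 (with c the longest side) contains a right angle."""
    a, b, c = sorted((a, b, c))
    return a * a + b * b == c * c

print(encloses_right_angle(3, 4, 5))  # True
print(encloses_right_angle(3, 4, 6))  # False
```

Since 3² + 4² = 9 + 16 = 25 = 5², the knotted rope guarantees an exact right angle.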

Credit : Story of Mathematics 

Picture Credit : Google

How is mathematics used in the kitchen?

Have you ever wondered how the food that lands up on your plate is made? No, we are not talking about the journey of rice or other materials from the time they are sown in fields till the time they are cooked into food. Instead, we are talking about how food is prepared in the kitchen, either by those cooking at home, or by the chefs who prepare the food in hotels.

Culinary math is here

Culinary math is an emerging field that combines kitchen science with mathematics. At the heart of this subject is the understanding that appealing meals aren’t made by just combining ingredients in a haphazard manner. A great cook, in fact, has a lot in common with a scientist and a mathematician.

This is because what is made to look carefree and spontaneous in cookery shows is actually the result of years of hard work and practice. Cooking routines include simple to complex mathematical calculations. From counting portions to increasing the yield when required, there are numbers at play during various stages of the meal.

Computation and geometry

While addition, subtraction, multiplication, division, and fractions are involved in measuring and working with the ingredients, ratios, percentages, and yields come into the picture when deciding the total amount of food to be cooked, and then distributing it among people.
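Scaling a recipe's yield is the most common of these calculations: every quantity is multiplied by the ratio of the target yield to the current one. A minimal Python sketch (the ingredient names and quantities are made up for illustration):

```python
def scale_recipe(ingredients, current_yield, target_yield):
    """Scale each ingredient quantity by the ratio target / current yield."""
    factor = target_yield / current_yield
    return {name: qty * factor for name, qty in ingredients.items()}

# A hypothetical rice recipe that serves 4, scaled up to serve 6.
base = {"rice_g": 200, "water_ml": 400, "salt_g": 5}
print(scale_recipe(base, 4, 6))  # every quantity multiplied by 1.5
```

The same ratio-based thinking handles halving a dish, converting a restaurant batch to a home portion, and so on.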

When working with spherical roti doughs and cubic paneer portions, a cook is knowingly or unknowingly dabbling with geometry. And by being familiar with units and abbreviations of measurements, and fluently converting them from one system to another, the person who is cooking is also able to borrow from cuisines from abroad.

A number of courses in culinary math have started to develop around the world, targeting students who aim to become chefs in high-end hotels. For, even though it might seem as if a famous chef is just sprinkling a bit of this, grabbing a pinch of that, and garnishing with a little bit of something else, there is a lot of maths behind it, and knowing that maths makes the work easier.

Picture Credit : Google

Why always add, when we can also subtract?

What do you do when you find yourself in a sticky situation and you need to find a solution? Do you try to add some element to it in the hope that it would improve the overall situation? If so, you are not alone. A recent study shows that when people are looking to improve a situation, idea or object, an overwhelming majority of them try to add something to it, irrespective of whether it helps or not. This also means that people rarely stop to think of removing something as a solution, even if that might actually work.

In order to understand this better, think about all the adults working from home during the ongoing pandemic. You must have noticed that many of them, maybe even your parents, have complained about attending endless meetings that eat into their schedule, giving them little time to do actual work. This is a classic case of adding more and more meetings to make up for the office environment, with little thought going into whether all those meetings are actually required. A simpler solution might have been to stick to existing schedules, or maybe even to cut down some meetings (consider the fatigue involved in video calls as opposed to face-to-face encounters) and make communication within an organisation more efficient.

In a paper that featured in Nature, researchers from the University of Virginia looked at why people overlook subtractive ideas altogether in all kinds of contexts. They stated that additive ideas come more quickly and easily, whereas subtractive ones need more effort. As we are living in a fast-paced world where we are likely to implement the first ideas that occur to us, this means that additive solutions are largely accepted, without even considering subtractive ones.

Self-reinforcing effect

This further has a self-reinforcing effect. As we rely more and more on additive ideas, they become more easily accessible to us. With time, this becomes a habit, meaning our brains have a strong urge to look for additive ideas. As a result, the ability to improve the world through subtractive strategies is lost on us.

While the interesting finding of the research, which overlaps behavioural science and engineering, could have plenty of applications across sectors, researchers believe it could be particularly useful in how we harness technology.

Less is more

The results highlight humanity’s overwhelming focus on always adding, even when the correct answer might actually be to subtract something. While this holds true for everything from people struggling with overfull schedules to institutions finding it hard to adhere to more and more rules, it also shows how we are inherently geared towards exhausting more of our planet’s resources. A minimalist approach of less is more might work wonders in a lot of situations.

Picture Credit : Google

What is knot theory?

Have you ever wondered if there is more to knots than meets your eye when tying your shoelaces? If so, here’s your answer: there’s a theory in mathematics called knot theory that delves into exactly this.

Study of closed curves

Knot theory is the study of closed curves in three dimensions and their possible deformations without one part cutting through another. Imagine a string that is interlaced and looped in any manner. If this string is then joined at the ends, it is a knot.

The least number of crossings that remain no matter how a knot is moved or deformed denotes the knot’s complexity. This minimum number of crossings is called the order of the knot, and the simplest possible knot has an order of three.

More crossings, more knots

As the order increases, the number of distinguishable knots increases manifold. While the number of knots with an order 13 is around 10,000, that number jumps to a million for an order of 16.

German mathematician Carl Friedrich Gauss took the first steps towards a mathematical theory of knots around 1800. The first attempt to systematically classify knots came in the second half of the 19th Century from Scottish mathematician-physicist Peter Guthrie Tait.

While knot theory continued to develop for the next 100 years or so as a pure mathematical tool, it then started finding utility elsewhere as well. A breakthrough by New Zealand mathematician Vaughan Jones in 1984 allowed American mathematical physicist Edward Witten to discover a connection between knot theory and quantum field theory, while American mathematician William Thurston connected knots to hyperbolic geometry. Jones, Witten, and Thurston all won the Fields Medal, considered to be among the highest prizes in mathematics, for their contributions.

Many applications

These developments in the last few decades have meant that knot theory has found applications in biology, chemistry, mathematical physics, and even cosmology. Who knows? The possibilities with knots could well be endless.


Picture Credit : Google

In mathematics, how do you know when you have proven a theorem?

Two things: You learn that you don’t know, and you learn that deep inside, you do.

When you find, or compose, or are moonstruck by a good proof, there’s a sense of inevitability, of innate truth. You understand that the thing is true, and you understand why, and you see that it can’t be any other way. It’s like falling in love. How do you know that you’ve fallen in love? You just do.

Such proofs may be incomplete, or even downright wrong. It doesn’t matter. They have a true core, and you know it, you see it, and from there it’s only a matter of filling the gaps, cleaning things up, eliminating redundancy, finding shortcuts, rearranging arguments, organizing lemmas, generalizing, generalizing more, realizing that you’ve overgeneralized and backtracking, writing it all neatly in a paper, showing it around, and having someone show you that your brilliant proof is simply wrong.

And this is where you either realize that you’ve completely fooled yourself because you so wanted to be in love, which happens more often when you’re young and inexperienced, or you realize that it’s merely technically wrong and the core is still there, pulsing with beauty. You fix it, and everything is good with the world again.

Experience, discipline, intuition, trust and the passage of time are the things that make the latter more likely than the former. When do you know for sure? You never know for sure. I have papers I wrote in 1995 that I’m still afraid to look at because I don’t know what I’ll find there, and there’s a girl I thought I loved in 7th grade and I don’t know if that was really love or just teenage folly. You never know.

Fortunately, with mathematical proofs, you can have people peer into your soul and tell you if it’s real or not, something that’s harder to arrange with crushes. That’s the only way, of course. The best mathematicians need that process in order to know for sure. Someone mentioned Andrew Wiles; his was one of the most famous instances of public failure, but it’s far from unique. I don’t think there is any mathematician who has never had a colleague demolish their wonderful creation.

Breaking proofs into steps (called lemmas) can help immensely, because the truth of the lemmas can be verified independently. If you’re disciplined, you work hard to disprove your lemmas, to find counterexamples, to encourage others to find counterexamples, to critique your own lemmas as though they belonged to someone else. This is the very old and very useful idea of modularization: split up your Scala code, or your engineering project, or your proof, or what have you, into meaningful pieces and wrestle with each one independently. This way, even if your proof is broken, it’s perhaps just one lemma that’s broken, and if the lemma is actually true and it’s just your proof that’s wrong, you can still salvage everything by re-proving the lemma.
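The lemma-first discipline described above is exactly how machine-checked proofs are organised: each lemma is verified on its own and then reused as a trusted building block. A toy Lean 4 sketch (the names and statements are mine, chosen for illustration):

```lean
-- A lemma proved and checked independently...
theorem double_eq_add_self (n : Nat) : 2 * n = n + n := by
  omega

-- ...then reused as a trusted building block in the main result.
theorem sum_self_is_even (n : Nat) : ∃ k, n + n = 2 * k :=
  ⟨n, (double_eq_add_self n).symm⟩
```

If `double_eq_add_self` were ever found wanting, only that one proof would need repair; `sum_self_is_even` depends only on its statement, not on how it was proved.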

Or not. Maybe the lemma is harder than your theorem. Maybe it’s unprovable. Maybe it’s wrong and you’re not seeing it. Harsh mistress she is, math, and this is a long battle. It may take weeks, or months, or years, and in the end it may not feel at all like having created a masterpiece; it may feel more like a house of sand and fog, with rooms and walls that you only vaguely believe are standing firm. So you send it for publication and await the responses.

Peer reviewers sometimes write: this step is wrong, but I don’t think it’s a big deal, you can fix it. They themselves may not even know how to fix it, but they have the experience and the intuition to know that it’s fine, and fixing it is just work. They ask you politely to do the work, and they may even accept the paper for publication pending the clean up of such details.

There are, sometimes, errors in published papers. It happens. We’re all human. Proofs that are central have been redone so many times that they are as close to infallible as anything of value can be, and we can be as certain of them as we are certain of anything. Proofs that are marginal and minor are more likely to be occasionally wrong.

So when do you know for sure? When reviewers reviewed, and time passes, and people redo your work and build on it and expand it, and over time it becomes absolutely clear that the underlying truth is unassailable. Then you know. It doesn’t happen overnight, but eventually you know.

And if you’re good, it just reaffirms what you knew, deep inside, from the very beginning.

Mathematical proofs can be formalized using various logical frameworks (syntactic languages, axiom systems, inference rules). In this they differ from most other human endeavors.

It's important to realize, however, that actual working mathematicians almost never write down formal versions of their proofs. Open any paper in any math journal and you'll invariably find prose, a story told in some human language (usually English, sometimes French or German). There are certainly lots of math symbols and nomenclature, but the arguments are still communicated in English.

In recent decades, tremendous progress has been made on practical formalizations of real proofs. With proof assistants like Coq and HOL Light (the system used in the Flyspeck project), it has become possible to write down a completely formal list of steps for proving a theorem, and have a computer verify those steps and issue a formal certificate that the proof is, indeed, correct.

The motivation for setting up those systems is, at least in part, precisely the desire to remove the human, personal aspects I described and make it unambiguously clear if a proof is correct or not.

One of the key proponents of those systems is Thomas Hales, who developed an immensely complex proof of the Kepler Conjecture and was driven by a strong desire to know whether it was correct. I’m fairly certain he wanted, first and foremost, to know the answer to that question himself. Hales couldn’t tell, by himself, whether his own proof was correct.

It is possible that in the coming decades the process will become entirely mechanized, although it won't happen overnight. As of 2016, the vast majority of proofs are still developed, communicated and verified in a very social, human way, as they were for hundreds of years, with all the hope, faith, imprecision, failure and joy that human endeavors entail.


Credit : Quora

Picture Credit : Google