How is mathematics used in the kitchen?

Have you ever wondered how the food that lands up on your plate is made? No, we are not talking about the journey of rice or other ingredients from the time they are sown in fields till the time they are cooked into food. Instead, we are talking about how food is prepared in the kitchen, either by those cooking at home or by the chefs who prepare food in hotels.

Culinary math is here

Culinary math is an emerging field that combines kitchen science with mathematics. At the heart of this subject is the understanding that appealing meals aren’t made by just combining ingredients in a haphazard manner. A great cook, in fact, has a lot in common with a scientist and a mathematician.

This is because what is made to look carefree and spontaneous in cookery shows is actually the result of years of hard work and practice. Cooking routines include simple to complex mathematical calculations. From counting portions to increasing the yield when required, there are numbers at play during various stages of the meal.

Computation and geometry

Addition, subtraction, multiplication, division, and fractions are involved in measuring and working with the ingredients, while ratios, percentages, and yields come into the picture when deciding the total amount of food to be cooked and then portioning it out to people.
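As a rough illustration of how such yield calculations work, here is a minimal sketch in Python; the recipe, quantities, and serving sizes below are invented for the example and are not from the article.

```python
# A minimal sketch (recipe and quantities invented for illustration):
# scale every ingredient by a yield factor to serve more people.
base_recipe = {"rice (g)": 300, "water (ml)": 600, "salt (g)": 5}  # serves 4
base_servings = 4
needed_servings = 10

factor = needed_servings / base_servings   # yield factor = new yield / old yield
scaled = {item: qty * factor for item, qty in base_recipe.items()}
print(scaled)  # {'rice (g)': 750.0, 'water (ml)': 1500.0, 'salt (g)': 12.5}
```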

When working with spherical roti doughs and cubic paneer portions, a cook is knowingly or unknowingly dabbling in geometry. And by being familiar with the units and abbreviations of measurements, and by fluently converting them from one system to another, the person who is cooking is also able to borrow from cuisines abroad.
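Converting between systems of measurement is, again, simple arithmetic. Here is a small, hypothetical Python sketch using standard conversion factors for US cups and ounces; the quantities converted are just examples.

```python
# A small sketch of unit conversion (standard factors, rounded):
# follow a recipe written in cups and ounces using metric measures.
ML_PER_US_CUP = 236.588     # millilitres in one US customary cup
GRAMS_PER_OUNCE = 28.3495   # grams in one avoirdupois ounce

def cups_to_ml(cups: float) -> float:
    return cups * ML_PER_US_CUP

def ounces_to_grams(ounces: float) -> float:
    return ounces * GRAMS_PER_OUNCE

print(round(cups_to_ml(2), 1))        # 473.2
print(round(ounces_to_grams(8), 1))   # 226.8
```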

A number of courses in culinary math have started to develop around the world, targeting students who aim to become chefs in high-end hotels. For, even though it might seem as if a famous chef is just sprinkling a bit of this, grabbing a pinch of that, and garnishing with a little bit of something else, there is a lot of maths behind it, and knowing that maths makes the job easier.


Why always add, when we can also subtract?

What do you do when you find yourself in a sticky situation and you need to find a solution? Do you try to add some element to it in the hope that it would improve the overall situation? If so, you are not alone. A recent study shows that when people are looking to improve a situation, idea or object, an overwhelming majority of them try to add something to it, irrespective of whether it helps or not. This also means that people rarely stop to consider removing something as the solution, even if it might actually work.

In order to understand this better, think about all the adults working from home during the ongoing pandemic. You must have noticed that many of them, maybe even your parents, have complained about attending endless meetings that eat into their schedule, giving them little time to do actual work. This is a classic case of adding more and more meetings to make up for the office environment, with little thought going into whether all those meetings are actually required. A simpler solution might have been to stick to existing schedules, or maybe even to cut down on some meetings (consider the fatigue involved in video calls as opposed to face-to-face encounters), making communication within an organisation more efficient.

In a paper that featured in Nature, researchers from the University of Virginia looked at why people overlook subtractive ideas altogether in all kinds of contexts. They stated that additive ideas come more quickly and easily, whereas subtractive ones need more effort. As we live in a fast-paced world where we are likely to implement the first ideas that occur to us, additive solutions are largely accepted without subtractive ones even being considered.

Self-reinforcing effect

This further has a self-reinforcing effect. As we rely more and more on additive ideas, they become more easily accessible to us. With time, this becomes a habit, meaning our brains have a strong urge to look for additive ideas. As a result, the ability to improve the world through subtractive strategies is lost on us.

While the interesting finding of the research, which overlaps behavioural science and engineering, could have plenty of applications across sectors, researchers believe it could be particularly useful in how we harness technology.

Less is more

The results highlight humanity’s overwhelming focus on always adding, even when the correct answer might actually be to subtract something. While this holds true for everything from people struggling with overfull schedules to institutions finding it hard to adhere to more and more rules, it also shows how we are inherently geared towards exhausting more of our planet’s resources. A minimalist approach of less is more might work wonders in a lot of situations.


What is knot theory?

Have you ever wondered if there is more to knots than meets your eye when tying your shoelaces? If so, here’s your answer: there’s a theory in mathematics called knot theory that delves into exactly this.

Study of closed curves

Knot theory is the study of closed curves in three dimensions and their possible deformations without one part cutting through another. Imagine a string that is interlaced and looped in any manner. If this string is then joined at the ends, it is a knot.

The least number of crossings that remain no matter how the knot is moved around denotes the knot’s complexity. This minimum number of crossings is called the order of the knot, and the simplest possible knot, the trefoil, has an order of three.

More crossings, more knots

As the order increases, the number of distinguishable knots increases manifold. While the number of knots with order 13 is around 10,000, that number jumps to over a million for order 16.

German mathematician Carl Friedrich Gauss took the first steps towards a mathematical theory of knots around 1800. The first attempt to systematically classify knots came in the second half of the 19th Century from Scottish mathematician-physicist Peter Guthrie Tait.

While knot theory continued to develop for the next 100 years or so as a branch of pure mathematics, it then started finding utility elsewhere as well. A breakthrough by New Zealand mathematician Vaughan Jones in 1984, the discovery of what is now called the Jones polynomial, allowed American mathematical physicist Edward Witten to connect knot theory with quantum field theory, while American mathematician William Thurston linked it with hyperbolic geometry. Jones, Witten, and Thurston all won the Fields Medal, considered to be among the highest prizes in mathematics, for their contributions.

Many applications

These developments over the last few decades have meant that knot theory has found applications in biology, chemistry, mathematical physics, and even cosmology. Who knows, the possibilities with knots could well be endless.

 


In mathematics, how do you know when you have proven a theorem?



Two things: You learn that you don’t know, and you learn that deep inside, you do.



When you find, or compose, or are moonstruck by a good proof, there’s a sense of inevitability, of innate truth. You understand that the thing is true, and you understand why, and you see that it can’t be any other way. It’s like falling in love. How do you know that you’ve fallen in love? You just do.



Such proofs may be incomplete, or even downright wrong. It doesn’t matter. They have a true core, and you know it, you see it, and from there it’s only a matter of filling the gaps, cleaning things up, eliminating redundancy, finding shortcuts, rearranging arguments, organizing lemmas, generalizing, generalizing more, realizing that you’ve overgeneralized and backtracking, writing it all neatly in a paper, showing it around, and having someone show you that your brilliant proof is simply wrong.



And this is where you either realize that you’ve completely fooled yourself because you so wanted to be in love, which happens more often when you’re young and inexperienced, or you realize that it’s merely technically wrong and the core is still there, pulsing with beauty. You fix it, and everything is good with the world again.



Experience, discipline, intuition, trust and the passage of time are the things that make the latter more likely than the former. When do you know for sure? You never know for sure. I have papers I wrote in 1995 that I’m still afraid to look at because I don’t know what I’ll find there, and there’s a girl I thought I loved in 7th grade and I don’t know if that was really love or just teenage folly. You never know.



Fortunately, with mathematical proofs, you can have people peer into your soul and tell you if it’s real or not, something that’s harder to arrange with crushes. That’s the only way, of course. The best mathematicians need that process in order to know for sure. Someone mentioned Andrew Wiles; his was one of the most famous instances of public failure, but it’s far from unique. I don’t think there’s any mathematician who has never had a colleague demolish their wonderful creation.



Breaking proofs into steps (called lemmas) can help immensely, because the truth of the lemmas can be verified independently. If you’re disciplined, you work hard to disprove your lemmas, to find counterexamples, to encourage others to find counterexamples, to critique your own lemmas as though they belonged to someone else. This is the very old and very useful idea of modularization: split up your Scala code, or your engineering project, or your proof, or what have you, into meaningful pieces and wrestle with each one independently. This way, even if your proof is broken, it’s perhaps just one lemma that’s broken, and if the lemma is actually true and it’s just your proof that’s wrong, you can still salvage everything by re-proving the lemma.
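To make the lemma idea concrete, here is a toy sketch in the Lean proof assistant (not something from the original answer; the lemma names and statements are invented for illustration). The main result is assembled from two small lemmas that a computer can check independently, so if one step turns out to be wrong, only that step needs repair.

```lean
-- A toy illustration of modularization (Lean 4): the main theorem is
-- assembled from two lemmas that can be checked independently.
theorem step_one (n : Nat) : n + 0 = n := Nat.add_zero n

theorem step_two (n : Nat) : 0 + n = n := Nat.zero_add n

-- If either lemma were broken, only that piece would need re-proving;
-- the overall structure of the argument would survive.
theorem main_result (n : Nat) : n + 0 = 0 + n := by
  rw [step_one, step_two]
```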



Or not. Maybe the lemma is harder than your theorem. Maybe it’s unprovable. Maybe it’s wrong and you’re not seeing it. Harsh mistress she is, math, and this is a long battle. It may take weeks, or months, or years, and in the end it may not feel at all like having created a masterpiece; it may feel more like a house of sand and fog, with rooms and walls that you only vaguely believe are standing firm. So you send it for publication and await the responses.



Peer reviewers sometimes write: this step is wrong, but I don’t think it’s a big deal, you can fix it. They themselves may not even know how to fix it, but they have the experience and the intuition to know that it’s fine, and fixing it is just work. They ask you politely to do the work, and they may even accept the paper for publication pending the clean up of such details.



There are, sometimes, errors in published papers. It happens. We’re all human. Proofs that are central have been redone so many times that they are more infallible than anything of value, and we can be as certain of them as we are certain of anything. Proofs that are marginal and minor are more likely to be occasionally wrong.



So when do you know for sure? When reviewers reviewed, and time passes, and people redo your work and build on it and expand it, and over time it becomes absolutely clear that the underlying truth is unassailable. Then you know. It doesn’t happen overnight, but eventually you know.



And if you’re good, it just reaffirms what you knew, deep inside, from the very beginning.



Mathematical proofs can be formalized, using various logical frameworks (syntactic languages, axiom systems, inference rules). In that respect, they are different from various other human endeavors.



It's important to realize, however, that actual working mathematicians almost never write down formal versions of their proofs. Open any paper in any math journal and you'll invariably find prose, a story told in some human language (usually English, sometimes French or German). There are certainly lots of math symbols and nomenclature, but the arguments are still communicated in English.



In recent decades, tremendous progress has been made on practical formalizations of real proofs. With proof assistants like Coq and HOL, and projects like Flyspeck that build on them, it has become possible to write down a completely formal list of steps for proving a theorem, and have a computer verify those steps and issue a formal certificate that the proof is, indeed, correct.



The motivation for setting up those systems is, at least in part, precisely the desire to remove the human, personal aspects I described and make it unambiguously clear if a proof is correct or not.



One of the key proponents of those systems is Thomas Hales, who developed an immensely complex proof of the Kepler Conjecture and was driven by a strong desire to know whether it's correct or not. I'm fairly certain he wanted, first and foremost, to know the answer to that question himself. Hales couldn't tell, by himself, if his own proof is correct.



It is possible that in the coming decades the process will become entirely mechanized, although it won't happen overnight. As of 2016, the vast majority of proofs are still developed, communicated and verified in a very social, human way, as they were for hundreds of years, with all the hope, faith, imprecision, failure and joy that human endeavors entail.



 



Credit : Quora





What is the story of taxicab numbers?



Are you aware of numbers that are called taxicab numbers? The nth taxicab number is the smallest number representable in n different ways as a sum of two positive integer cubes. These numbers are also known as Hardy-Ramanujan numbers. The name taxicab numbers, in fact, is derived from a story told about Indian mathematician Srinivasa Ramanujan by English mathematician GH Hardy. Here is the story, as told by Hardy: "I remember once going to see him (Ramanujan) when he was lying ill at Putney. I had ridden in taxi-cab No. 1729, and remarked that the number seemed to be rather a dull one, and that I hoped it was not an unfavourable omen. 'No,' he replied, 'it is a very interesting number; it is the smallest number expressible as the sum of two [positive] cubes in two different ways.'"



1729, naturally, is the most popular taxicab number. It can be expressed both as the sum of 12^3 and 1^3 (1728 + 1) and as the sum of 10^3 and 9^3 (1000 + 729).



While the story involving Ramanujan made these numbers famous and also gave them their name, the numbers themselves were actually known earlier. The first mention of this concept can be traced back to the 17th Century.



2 (1^3 + 1^3) is the first taxicab number and 1729 is the second. The numbers after 1729 have been found using computers, and six taxicab numbers are known so far.
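The kind of search a computer does can be sketched in a few lines. Here is a hypothetical Python snippet (not tied to any actual published program) that rediscovers the second taxicab number by brute force.

```python
from collections import defaultdict

# A brute-force sketch: group sums of two positive cubes (with a <= b)
# and pick the smallest sum that has at least two different representations.
sums = defaultdict(list)
for a in range(1, 21):
    for b in range(a, 21):
        sums[a**3 + b**3].append((a, b))

ta2 = min(n for n, reps in sums.items() if len(reps) >= 2)
print(ta2, sums[ta2])  # 1729 [(1, 12), (9, 10)]
```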



 


