please teach me Mathematics

minamoto



I'm bad at mathematics... please teach me... I love mathematics... I need it in my life... I feel dumb if I don't learn mathematics... as long as I don't learn mathematics, it means I'm a dumb idiot... I hate myself for that.
 

Conspirator.

I'll teach you some maths right now. You're like the exponential function: no matter how many times you try to integrate with the NB community, the result is the same. You come off as a retard each time. Lecture done.
 

P3ĮÑ

Ok boi. Ima teach you some BSc-grade mathematics. These are computer science algorithms.

First of all, we start with asymptotic notation.

So far, the way we analyzed linear search and binary search has been by counting the maximum number of guesses we need to make. But what we really want to know is how long these algorithms take. We're interested in time, not just guesses. The running times of linear search and binary search include the times to make and check guesses, but there's more to these algorithms than making and checking guesses.
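For anyone following along, here's a rough sketch of the two searches being discussed, with a guess counter bolted on (my own illustration, not part of the original lesson; binary search assumes a sorted array):

```python
def linear_search(arr, target):
    """Scan left to right; worst case takes len(arr) guesses."""
    guesses = 0
    for i, value in enumerate(arr):
        guesses += 1
        if value == target:
            return i, guesses
    return -1, guesses

def binary_search(arr, target):
    """Halve the search range on each guess; worst case takes about
    log2(len(arr)) guesses. Assumes arr is sorted ascending."""
    guesses = 0
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        guesses += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid, guesses
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, guesses

arr = list(range(1000))
print(linear_search(arr, 999))  # (999, 1000): a full scan
print(binary_search(arr, 999))  # (999, 10): about log2(1000) guesses
```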
The running time of an algorithm depends on how long it takes a computer to run the lines of code of the algorithm. And that depends on the speed of the computer, the programming language, and the compiler that translates the program from the programming language into code that runs directly on the computer, as well as other factors.
Let's think about the running time of an algorithm more carefully. We use a combination of two ideas. First, we determine how long the algorithm takes, in terms of the size of its input. This idea makes intuitive sense, doesn't it? We've already seen that the maximum number of guesses in linear search and binary search increases as the length of the array increases. Or think about a GPS. If it knew about only the interstate highway system, and not about every little road, it should be able to find routes more quickly, right? So we think about the running time of the algorithm as a function of the size of its input.
The second idea is that we focus on how fast this function grows with the input size. We call that the rate of growth of the running time. To keep things manageable, we simplify the function to distill the most important part and cast aside the less important parts. For example, suppose that an algorithm, running on an input of size n, takes 6n^2 + 100n + 300 machine instructions. The 6n^2 term becomes larger than the remaining terms, 100n + 300, once n becomes large enough (20 in this case). Here's a chart showing values of 6n^2 and 100n + 300 for values of n from 0 to 100:
[Chart: 6n^2 vs 100n + 300]
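Quick sanity check on that crossover (my own snippet, just plugging n into the two expressions from the lesson):

```python
# Find the first n where the 6n^2 term overtakes 100n + 300.
n = 0
while 6 * n**2 <= 100 * n + 300:
    n += 1
print(n)  # 20, matching the crossover claimed above
```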

We would say that the running time of this algorithm grows as n^2, dropping the coefficient 6 and the remaining terms 100n + 300. It doesn't really matter what coefficients we use; as long as the running time is an^2 + bn + c, for some numbers a > 0, b, and c, there will always be a value of n for which an^2 is greater than bn + c, and this difference increases as n increases. For example, here's a chart showing values of 0.6n^2 and 1000n + 3000, so that we've reduced the coefficient of n^2 by a factor of 10 and increased the other two constants by a factor of 10:
[Chart: 0.6n^2 vs 1000n + 3000]
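If you want the exact crossover for this second pair, a quick quadratic-formula check (again my own snippet, not part of the lesson) puts it around n = 1670:

```python
import math

# Solve 0.6n^2 = 1000n + 3000 for the positive root, then round up
# to the first integer n where the quadratic term actually wins.
a, b, c = 0.6, 1000, 3000
root = (b + math.sqrt(b * b + 4 * a * c)) / (2 * a)
n = math.ceil(root)
print(n, 0.6 * n**2 > 1000 * n + 3000)  # 1670 True
```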

The value of n at which 0.6n^2 becomes greater than 1000n + 3000 has increased, but there will always be such a crossover point, no matter what the constants.
By dropping the less significant terms and the constant coefficients, we can focus on the important part of an algorithm's running time (its rate of growth) without getting mired in details that complicate our understanding. When we drop the constant coefficients and the less significant terms, we use asymptotic notation. We'll see three forms of it: big-Θ notation, big-O notation, and big-Ω notation.
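The lesson only names those three here, but for reference, the standard textbook definitions (my summary, not taken from this post) look like this:

```latex
\[
\begin{aligned}
f(n) = O(g(n)) &\iff \exists\, c > 0,\ n_0 \text{ such that } 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0,\\
f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 \text{ such that } 0 \le c\,g(n) \le f(n) \text{ for all } n \ge n_0,\\
f(n) = \Theta(g(n)) &\iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n)).
\end{aligned}
\]
```

For the running time above, 6n^2 + 100n + 300 = Θ(n^2): the lower bound holds with c = 6 for every n ≥ 0, and the upper bound with c = 7 for every n ≥ 103 (the point where n^2 ≥ 100n + 300).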
 

minamoto

P3ĮÑ said:
(full tutorial quoted above)
No, not this one. I mean simple mathematics!!
 