Saturday, August 11, 2012

Direct Method for Finding Formulas



Prerequisites: calculus, college-level algebra, linear algebra, and analytic geometry, or at least some basic trigonometry.

This next technique demonstrates yet another derivation of Simpson's formula, but it also goes beyond classical interpolation: by substituting a set of defining equations into a formula with undetermined coefficients, one can solve directly for the coefficients of the formula.

It goes as follows. Let a formula be written as

$$\int_{-1}^{1} y(x)\,dx = a\,y(-1) + b\,y(0) + c\,y(1)$$
You may notice this is similar to Simpson's formula, except with undetermined coefficients a, b, and c. What we do next is set up a series of linear equations by substituting three different defining equations into the formula and evaluating both sides at the given values x = -1, 0, 1 indicated above.

The defining equations are $y = 1$, $y = x$, and $y = x^2$. Substituting each into the formula and evaluating both sides gives

$$\int_{-1}^{1} 1\,dx = 2 = a + b + c \qquad (1)$$

$$\int_{-1}^{1} x\,dx = 0 = -a + c \qquad (2)$$

$$\int_{-1}^{1} x^{2}\,dx = \frac{2}{3} = a + c \qquad (3)$$
If we solve for the coefficients through substitution, we end up with Simpson's formula yet again: $a = c = \tfrac{1}{3}$ and $b = \tfrac{4}{3}$.
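As a quick check, here's a minimal sketch (Python with NumPy, my own choice of tool, not something from the original) that solves the same three defining equations numerically:

```python
import numpy as np

# Rows come from the defining equations y = 1, x, x^2 evaluated at the
# sample points x = -1, 0, 1; the right-hand sides are the exact integrals.
A = np.array([[ 1.0, 1.0, 1.0],   #  a + b + c = 2
              [-1.0, 0.0, 1.0],   # -a     + c = 0
              [ 1.0, 0.0, 1.0]])  #  a     + c = 2/3
rhs = np.array([2.0, 0.0, 2.0 / 3.0])

a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)  # 0.333..., 1.333..., 0.333... -- the Simpson coefficients
```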

Demonstrating this in application.

Finding the formula for

$$\int_{-1}^{1} e^{x}\,y(x)\,dx = w_{-1}\,y(-1) + w_{0}\,y(0) + w_{1}\,y(1)$$
where the undetermined coefficients are $w_{-1}$, $w_{0}$, and $w_{1}$.

We construct the defining equations just as above, again using $y = 1$, $y = x$, and $y = x^2$, but the results are different:

$$\int_{-1}^{1} e^{x}\,dx = w_{-1} + w_{0} + w_{1} \qquad (4)$$

$$\int_{-1}^{1} x\,e^{x}\,dx = -w_{-1} + w_{1} \qquad (5)$$

$$\int_{-1}^{1} x^{2}\,e^{x}\,dx = w_{-1} + w_{1} \qquad (6)$$
If you've forgotten your integration techniques, I'll provide a brief refresher here. In this case we'll use the well-known method of integration by parts.


For equation (4) no special technique is needed; the antiderivative is immediate,

$$\int e^{x}\,dx = e^{x} + C \qquad (7)$$

so substituting the limits gives

$$\int_{-1}^{1} e^{x}\,dx = \Big[e^{x}\Big]_{-1}^{1} = e - \frac{1}{e} \qquad (8)$$

For equation (5), to evaluate the left-hand side we use integration by parts with the substitutions $u = x$ and $dv = e^{x}\,dx$:

$$\int_{-1}^{1} x\,e^{x}\,dx = \Big[x\,e^{x}\Big]_{-1}^{1} - \int_{-1}^{1} e^{x}\,dx \qquad (9)$$

$$= \Big[(x - 1)\,e^{x}\Big]_{-1}^{1} = (1 - 1)\,e - (-1 - 1)\,e^{-1} = \frac{2}{e} \qquad (10)$$
 
For equation (6) we use integration by parts as we did above, this time twice, starting with $u = x^{2}$ and $dv = e^{x}\,dx$:

$$\int_{-1}^{1} x^{2}\,e^{x}\,dx = \Big[x^{2}\,e^{x}\Big]_{-1}^{1} - 2\int_{-1}^{1} x\,e^{x}\,dx \qquad (11)$$

$$= \Big[x^{2}\,e^{x} - 2(x - 1)\,e^{x}\Big]_{-1}^{1} = \Big[(x^{2} - 2x + 2)\,e^{x}\Big]_{-1}^{1} \qquad (12)$$

$$= (1 - 2 + 2)\,e - (1 + 2 + 2)\,e^{-1} = e - \frac{5}{e} \qquad (13)$$


Substituting (8) into (4), (10) into (5), and (13) into (6), the defining equations can be written

$$w_{-1} + w_{0} + w_{1} = e - \frac{1}{e}$$

$$-w_{-1} + w_{1} = \frac{2}{e}$$

$$w_{-1} + w_{1} = e - \frac{5}{e}$$
 
Now we have a set of linear equations that can be solved by substitution, by Gaussian elimination, or by another preferred method.
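Before grinding through the substitutions by hand, here's a minimal SymPy sketch of the same setup. Keep in mind the $e^x$ weight is my reconstruction of this worked example, so treat the snippet as illustrative:

```python
import sympy as sp

x = sp.symbols('x')
w = sp.symbols('w_m1 w_0 w_1')  # stand-ins for w_{-1}, w_0, w_1

# Left-hand sides: integrals of e^x * y(x) for y = 1, x, x^2 (eqs. 4-6).
moments = [sp.integrate(x**k * sp.exp(x), (x, -1, 1)) for k in range(3)]

# Right-hand sides: the formula applied to y = 1, x, x^2 at nodes -1, 0, 1.
nodes = [-1, 0, 1]
eqs = [sp.Eq(sum(wi * n**k for wi, n in zip(w, nodes)), moments[k])
       for k in range(3)]

print(sp.solve(eqs, w))
# w_m1 = (e - 7/e)/2, w_0 = 4/e, w_1 = (e - 3/e)/2, up to SymPy's formatting
```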

Using substitution, we find: adding and subtracting (5) and (6) gives

$$w_{1} = \frac{1}{2}\left(e - \frac{3}{e}\right) \approx 0.8073, \qquad w_{-1} = \frac{1}{2}\left(e - \frac{7}{e}\right) \approx 0.0716,$$

and then from (4),

$$w_{0} = \left(e - \frac{1}{e}\right) - \left(e - \frac{5}{e}\right) = \frac{4}{e} \approx 1.4715.$$
Thus the given formula looks as follows:

$$\int_{-1}^{1} e^{x}\,y(x)\,dx = \frac{1}{2}\left(e - \frac{7}{e}\right) y(-1) + \frac{4}{e}\,y(0) + \frac{1}{2}\left(e - \frac{3}{e}\right) y(1)$$
This will eventually lead into another technique that does much the same thing, except it treats the sample points themselves as parameters. Basically, rather than fixing x = -1, 0, and 1 as above, we leave the sample points free when determining the formula for such a function. I might also mention Gaussian quadrature alongside this, which is essentially the sample-points-as-parameters method abstracted to the n-point case; a small sketch of it follows below.
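For a taste of that, NumPy already ships the Gauss-Legendre nodes and weights; a minimal sketch (my own illustration, not from the original post):

```python
import numpy as np

# Gauss-Legendre nodes and weights on [-1, 1]; an n-point rule is exact
# for polynomials up to degree 2n - 1.
nodes, weights = np.polynomial.legendre.leggauss(3)

# Example: integrate x^4 over [-1, 1]; the exact value is 2/5.
print(np.dot(weights, nodes**4))  # ~0.4
```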


In case you are wondering where all the readings were taken from, I'd cite credit here to Numerical Methods for Scientists and Engineers, R. W. Hamming, Dover.

The recent past post also draws on the same source; much of this could be considered supplementary to Hamming's writing.

Friday, August 10, 2012

Trapezoid rule and Simpson's Formula


Okay, so recently I've been reading a bit on numerical methods for building integration formulas from function values.

As a prerequisite, I'd assume you have some calculus under your belt and enough algebra experience, especially when it comes to manipulating equations, although I'll show an example derivation of Simpson's rule.

To start, we'd like to find a formula for integrating a function between two given endpoints a and b.

The two given endpoints are denoted (a, y(a)) and (b, y(b)), and the interpolating line is given by

$$y(x) = \frac{y(a)(b - x) + y(b)(x - a)}{b - a}$$
Keep in mind y(x) is simply a function that maps x to a y value; it could just as well be written f(x). The y in y(x) is the function doing the mapping, not a Cartesian coordinate, while the value y(x) is the y value mapped from that x value.

Okay, so if you are wondering where this came from: using the slope formula for two points, we have

$$m = \frac{y(b) - y(a)}{b - a}$$

Using our more familiar point-slope formula, which is $y(x) - y_1 = m\,(x - x_1)$:

$$y(x) - y(a) = \frac{y(b) - y(a)}{b - a}\,(x - a)$$
which means

$$y(x) = y(a) + \frac{y(b) - y(a)}{b - a}\,(x - a)$$
Multiplying y(a) by (b-a)/(b-a) and separating out the term -y(a)(x-a)/(b-a) yields

$$y(x) = \frac{y(a)(b - a)}{b - a} + \frac{y(b)(x - a)}{b - a} - \frac{y(a)(x - a)}{b - a}$$
Distributing y(a) across the expression (b-a), distributing -y(a) across the expression (x-a), and remembering that all of these numerator terms can be rearranged and added since we have the like denominator (b-a), we have

$$y(x) = \frac{b\,y(a) - a\,y(a) + a\,y(a) - x\,y(a) + y(b)(x - a)}{b - a}$$
More conveniently, I've rearranged our terms so that we can see the cancellations here. Namely, -a y(a) and a y(a) are like terms and sum to zero. Thus we are left with b y(a) - x y(a) in the numerator, and since y(a) is common to both terms we can factor it out, leaving y(a)(b - x) in the numerator, which yields

$$y(x) = \frac{y(a)(b - x)}{b - a} + \frac{y(b)(x - a)}{b - a}$$

or, adding both terms over the common denominator,

$$y(x) = \frac{y(a)(b - x) + y(b)(x - a)}{b - a}$$

which is what we set out to derive (show).
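As a quick sanity check (this snippet is mine, not part of the original derivation), SymPy confirms the interpolating line really does pass through both endpoints:

```python
import sympy as sp

x, a, b, ya, yb = sp.symbols('x a b y_a y_b')  # y_a, y_b stand for y(a), y(b)
line = (ya * (b - x) + yb * (x - a)) / (b - a)

print(sp.simplify(line.subs(x, a)))  # y_a
print(sp.simplify(line.subs(x, b)))  # y_b
```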


Now, getting into some calculus. If you have no calculus under your belt, I can explain the basic integration rules used in the calculation here. I'd also point out that an integral distributes over the terms of a polynomial: the entire polynomial can be integrated under one integral, or the integration can be split across each term.

Thus, for example, a polynomial of the form

$$ax^{2} + bx + c$$

integrates as follows:

$$\int \left(ax^{2} + bx + c\right) dx = \int ax^{2}\,dx + \int bx\,dx + \int c\,dx$$

Now, there is a power rule for integration, which is the reverse of the derivative's power rule (hence the integral's other name, the antiderivative):

$$\int x^{n}\,dx = \frac{x^{n+1}}{n + 1} + C \qquad (n \neq -1)$$

Also remember that constants can be written outside the integral, or

$$\int a\,f(x)\,dx = a \int f(x)\,dx$$
 
so integrating the polynomial in this case would look as follows:

$$\int \left(ax^{2} + bx + c\right) dx = \frac{a\,x^{3}}{3} + \frac{b\,x^{2}}{2} + c\,x + C$$
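If you'd like to check that mechanically, a one-line SymPy sketch (my own aside) reproduces the antiderivative:

```python
import sympy as sp

a, b, c, x = sp.symbols('a b c x')
print(sp.integrate(a * x**2 + b * x + c, x))
# a*x**3/3 + b*x**2/2 + c*x  (SymPy omits the constant of integration)
```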
 
Now let's go ahead and integrate the function y(x) above:

$$\int_{a}^{b} y(x)\,dx = \int_{a}^{b} \frac{y(a)(b - x) + y(b)(x - a)}{b - a}\,dx$$
Keep in mind that b, a, y(a), y(b), b-a, and all fractional combinations of the same are constants, and thus can be treated as such outside the integral; we are integrating with respect to x. This lets us write the integral in the separate parts mentioned above. To do this, let's distribute the constant y(a) to the term (b-x) and distribute y(b) to the term (x-a), which yields

$$\int_{a}^{b} y(x)\,dx = \int_{a}^{b} \frac{b\,y(a) - x\,y(a) + x\,y(b) - a\,y(b)}{b - a}\,dx$$
Now let's break this polynomial up into separate terms. Keep in mind that when one algebraic expression divides another, we can break the numerator into separate terms as long as we preserve the expression in the denominator under each of them. Rewriting the integral term by term, and noting that all but two of the factors involved are constants that can be written outside their integrals, we have

$$\int_{a}^{b} y(x)\,dx = \frac{b\,y(a)}{b - a}\int_{a}^{b} dx - \frac{y(a)}{b - a}\int_{a}^{b} x\,dx + \frac{y(b)}{b - a}\int_{a}^{b} x\,dx - \frac{a\,y(b)}{b - a}\int_{a}^{b} dx$$
This is much easier to integrate! Integrating each integral, we have

$$= \frac{b\,y(a)}{b - a}\Big[x\Big]_{a}^{b} - \frac{y(a)}{b - a}\left[\frac{x^{2}}{2}\right]_{a}^{b} + \frac{y(b)}{b - a}\left[\frac{x^{2}}{2}\right]_{a}^{b} - \frac{a\,y(b)}{b - a}\Big[x\Big]_{a}^{b}$$
where, evaluating at the limits, we have

$$= \frac{b\,y(a)(b - a)}{b - a} - \frac{y(a)\left(b^{2} - a^{2}\right)}{2(b - a)} + \frac{y(b)\left(b^{2} - a^{2}\right)}{2(b - a)} - \frac{a\,y(b)(b - a)}{b - a}$$

We might be tempted to cancel (b-a) in at least two of the terms, but we should recombine terms first for a more convenient cancellation. First match the outermost terms to the common denominator by multiplying them by 2/2, then distribute b across (b-a) in the first term and a across (b-a) in the last. This yields

$$= \frac{\left(2b^{2} - 2ab\right) y(a) - \left(b^{2} - a^{2}\right) y(a) + \left(b^{2} - a^{2}\right) y(b) - \left(2ab - 2a^{2}\right) y(b)}{2(b - a)}$$

We can see that y(a) is a common factor of the first two terms, y(b) of the last two, and 2(b-a) is the common denominator throughout, so we can factor and combine terms as follows:

$$= \frac{\left(2b^{2} - 2ab - b^{2} + a^{2}\right) y(a)}{2(b - a)} + \frac{\left(b^{2} - a^{2} - 2ab + 2a^{2}\right) y(b)}{2(b - a)}$$

Simplifying the grouped terms, we notice

$$2b^{2} - 2ab - b^{2} + a^{2} = b^{2} - 2ab + a^{2} = (b - a)^{2}$$

Similarly, simplifying the numerator of the rightmost term gives the same factor, so that we have

$$= \frac{(b - a)^{2}\,y(a)}{2(b - a)} + \frac{(b - a)^{2}\,y(b)}{2(b - a)}$$

We now see cancellations of (b-a) between numerator and denominator in both terms, so that we have

$$\int_{a}^{b} y(x)\,dx = \frac{b - a}{2}\left(y(a) + y(b)\right)$$
 
This is better known as the Trapezoid rule.
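In code the rule is a one-liner; here's a minimal Python sketch (the function name is mine):

```python
def trapezoid(y, a, b):
    """Approximate the integral of y over [a, b] with a single trapezoid."""
    return (b - a) / 2.0 * (y(a) + y(b))

# Example: integrate x^2 over [0, 1]; the exact value is 1/3, and a single
# trapezoid overshoots to 1/2 since the chord lies above the curve.
print(trapezoid(lambda x: x**2, 0.0, 1.0))  # 0.5
```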

Now on to demonstrating Simpson's formula.

Consider a quadratic of the form

$$y(x) = ax^{2} + bx + c$$

We'll take three points (-1, y(-1)), (0, y(0)), (1, y(1)). Substituting each value of x for these given points, we have

$$y(-1) = a - b + c \qquad (1)$$

$$y(0) = c \qquad (2)$$

$$y(1) = a + b + c \qquad (3)$$

Substituting (2) into (3) we have

$$y(1) = a + b + y(0)$$

and

$$b = y(1) - y(0) - a \qquad (4)$$

Substituting (4) into (1), and (2) into (1), we have

$$y(-1) = a - \big(y(1) - y(0) - a\big) + y(0) = 2a - y(1) + 2\,y(0)$$

so

$$a = \frac{y(-1) - 2\,y(0) + y(1)}{2} \qquad (5)$$

and, back-substituting into (4),

$$b = \frac{y(1) - y(-1)}{2} \qquad (6)$$
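If you'd rather let a computer do the substitution, a small SymPy sketch (the symbol names are mine) solves the same three equations:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
ym1, y0, y1 = sp.symbols('y_m1 y_0 y_1')  # stand-ins for y(-1), y(0), y(1)

sol = sp.solve([sp.Eq(a - b + c, ym1),
                sp.Eq(c, y0),
                sp.Eq(a + b + c, y1)], [a, b, c])
print(sol)
# {a: y_m1/2 - y_0 + y_1/2, b: -y_m1/2 + y_1/2, c: y_0}
```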

Integrate the quadratic above over [-1, 1]:

$$\int_{-1}^{1} \left(ax^{2} + bx + c\right) dx = \left[\frac{a\,x^{3}}{3} + \frac{b\,x^{2}}{2} + c\,x\right]_{-1}^{1} = \frac{2a}{3} + 2c$$

Substitute (2), (5), and (6); notice the b term has integrated to zero and dropped out. This gives

$$\int_{-1}^{1} y(x)\,dx = \frac{2}{3} \cdot \frac{y(-1) - 2\,y(0) + y(1)}{2} + 2\,y(0) = \frac{y(-1) + 4\,y(0) + y(1)}{3} \qquad (7)$$

Equation (7) is Simpson's Formula.

What does this say to us? It says that the definite integral over the given interval equals a weighted sum of the function's values at the specified points, as in equation (7). While this may seem trivial for a polynomial that is easy to integrate directly, it has far more purposeful application to functions that are neither easily interpolated nor easily integrated. A short demonstration follows below.
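Here's a minimal Python sketch of that point (my own demonstration, not from the original): the formula is exact on quadratics and a respectable approximation elsewhere:

```python
import math

def simpson(y):
    """Simpson's formula (7) on [-1, 1]: (y(-1) + 4*y(0) + y(1)) / 3."""
    return (y(-1.0) + 4.0 * y(0.0) + y(1.0)) / 3.0

print(simpson(lambda x: x**2))  # 0.666..., exactly 2/3
print(simpson(math.exp))        # ~2.3620 vs the exact e - 1/e ~ 2.3504
```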






Thursday, August 9, 2012

Linear algebra: Lower diagonalization of the 3 x 3 matrix generalized

How to solve a system of linear equations by 'diagonalizing' the matrix.

I'll focus on lower diagonalization of the 3 x 3 matrix in the generalized case, and leave it to you to complete the solution once you've gotten a sense of what is going on here.

Basically, the idea here is to create a diagonal of 1s running from the top left to the bottom right of the matrix.

As a preface to why 'diagonalizing' (that is, solving) the matrix would be done here, I would suggest a reading on multiplication operations on matrices at
http://en.wikipedia.org/wiki/Matrix_multiplication

The purpose here, in short, is that 'diagonalizing' is solving the matrix: when we multiply the diagonalized matrix by the vector of unknowns, we read the solutions off directly.

For instance the 'diagonalized' matrix of the form

[ 1   0   0 | d1 ]
[ 0   1   0 | d2 ]
[ 0   0   1 | d3 ]
leads to a solution

x = d1,   y = d2,   z = d3
You can check the multiplication operations on the given matrix here.
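A tiny NumPy sketch makes the point concrete (the numbers are arbitrary examples of mine):

```python
import numpy as np

# With the coefficient matrix reduced to the identity, multiplying it by the
# solution vector returns the right-hand side: I @ [x, y, z] = [d1, d2, d3].
d = np.array([4.0, -1.0, 2.5])  # example right-hand side
print(np.eye(3) @ d)            # [ 4.  -1.   2.5]
```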

But let's start from the beginning, and solve our given matrix where it is of the following form:

[ a11   b12   c13 | d1 ]
[ a21   b22   c23 | d2 ]
[ a31   b32   c33 | d3 ]
The idea is that we use row-by-row operations to get the rows into diagonal form. Keep in mind that when operations are performed, they follow the same rules that apply in algebra: if one row is multiplied by a given number m, both sides of the equation on that row are multiplied by m, and every column in that row on both sides of the equation must be multiplied by m (the distributive rule).

Keep in mind we are solving ('diagonalizing') a matrix of 3 x 3 size; we could procedurally extend this method to any n x n matrix where a solution exists, so long as the rows are independent of one another, meaning no row is a multiple of another. Otherwise the matrix lacks enough independent equations to lead to a solution.

But I'll focus abstractly on the methodology of the solution here.

Our first objective in diagonalizing is to make the entry a21, at the (row 2, column 1) position, equal to 0. To do this we need a common term with respect to the entry a11 at the (row 1, column 1) position.

Here, for multipliers, we simply want like terms in the a11 and a21 positions that we can add and subtract. To do this, multiply the first row by a21 and multiply the second row by a11.

the first row is 

[a11*a21     b12*a21     c13*a21]

the second row is 

[a21*a11      b22*a11      c23*a11]

Just as in the ordinary case of algebraic operations on an equation, whatever is done to the left side of our equation must be done to the right side for each row, so

on the right side for row 1 column 1 we have

a21*d1

and on the right side for row 2 column 1 we have

a11*d2

or a matrix that looks like

[ a11*a21   b12*a21   c13*a21 | d1*a21 ]
[ a21*a11   b22*a11   c23*a11 | d2*a11 ]
[ a31       b32       c33     | d3     ]
Subtracting row 1 from row 2 we have

[ a11*a21   b12*a21             c13*a21             | d1*a21           ]
[ 0         b22*a11 - b12*a21   c23*a11 - c13*a21   | d2*a11 - d1*a21  ]
[ a31       b32                 c33                 | d3               ]
We'll divide row 1 back down by a21. Then we multiply row 1 by a31 and we multiply row 3 by a11. This yields a matrix of the following form:

[ a11*a31   b12*a31             c13*a31             | d1*a31           ]
[ 0         b22*a11 - b12*a21   c23*a11 - c13*a21   | d2*a11 - d1*a21  ]
[ a31*a11   b32*a11             c33*a11             | d3*a11           ]
Now we subtract row 1 from row 3:

[ a11*a31   b12*a31             c13*a31             | d1*a31           ]
[ 0         b22*a11 - b12*a21   c23*a11 - c13*a21   | d2*a11 - d1*a21  ]
[ 0         b32*a11 - b12*a31   c33*a11 - c13*a31   | d3*a11 - d1*a31  ]
Unfortunately, at this point the simplification isn't so pretty: without common factors hiding inside the coefficients themselves, there is little term reduction to be had. But we are working toward a general solution of the 3 x 3 case, which removes the need to re-solve the matrix from scratch each time. Here we can multiply row 2 by the reciprocal of the expression in its (row 2, column 2) position, and likewise multiply row 3 by the reciprocal of its (row 3, column 2) entry,

so that we have

[ a11*a31   b12*a31   c13*a31 | d1*a31 ]
[ 0   1   (c23*a11 - c13*a21)/(b22*a11 - b12*a21) | (d2*a11 - d1*a21)/(b22*a11 - b12*a21) ]
[ 0   1   (c33*a11 - c13*a31)/(b32*a11 - b12*a31) | (d3*a11 - d1*a31)/(b32*a11 - b12*a31) ]
Now we can subtract row 2 from row 3, which yields

[ a11*a31   b12*a31   c13*a31 | d1*a31 ]
[ 0   1   (c23*a11 - c13*a21)/(b22*a11 - b12*a21) | (d2*a11 - d1*a21)/(b22*a11 - b12*a21) ]
[ 0   0   (c33*a11 - c13*a31)/(b32*a11 - b12*a31) - (c23*a11 - c13*a21)/(b22*a11 - b12*a21) | (d3*a11 - d1*a31)/(b32*a11 - b12*a31) - (d2*a11 - d1*a21)/(b22*a11 - b12*a21) ]
Now imagine trying to solve the 4 x 4 case!
But at least here the z value is nearly solved. We could do some algebraic simplification; I'd suggest figuring out how this might occur, as it might help you down the road.

Rearranging terms, we multiply row 3 by (b32*a11 - b12*a31) and by (b22*a11 - b12*a21), and we'll designate these expressions respectively as rt and rs, which gives

[ a11*a31   b12*a31   c13*a31 | d1*a31 ]
[ 0   1   (c23*a11 - c13*a21)/rs | (d2*a11 - d1*a21)/rs ]
[ 0   0   (c33*a11 - c13*a31)*rs - (c23*a11 - c13*a21)*rt | (d3*a11 - d1*a31)*rs - (d2*a11 - d1*a21)*rt ]
Now, dividing row 3 by (c33*a11 - c13*a31)*rs - (c23*a11 - c13*a21)*rt, we have

[ a11*a31   b12*a31   c13*a31 | d1*a31 ]
[ 0   1   (c23*a11 - c13*a21)/rs | (d2*a11 - d1*a21)/rs ]
[ 0   0   1 | ((d3*a11 - d1*a31)*rs - (d2*a11 - d1*a21)*rt) / ((c33*a11 - c13*a31)*rs - (c23*a11 - c13*a21)*rt) ]
Excepting the first row, which is not yet in proper form, we've all but diagonalized the lower half of our matrix in this given solution.

I'd leave it as an exercise to diagonalize the triangle form of the upper half of this matrix. Basically, the idea here is to multiply one row by a given multiplier so that its factored form matches another.

In the next step, we would need a multiplier for the term in row 2, column 3 of this matrix. Of course, you wouldn't apply this multiplier permanently to row 3, since that row is already solved; you'd only use the scaled copy to subtract from row 2, so that row 2 takes the form [0 1 0]. We then proceed in reverse order, creating multipliers of the lower rows to eliminate the terms of the upper rows. Notice you can do this cleanly once the lower rows have [0 0 1] and [0 1 0] coefficients, since subtracting a multiple of them from an upper row cannot create new terms where we have already eliminated them.
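Putting the whole procedure together, here's a minimal Python sketch of the same reduction (a Gauss-Jordan elimination; the function name and example system are mine, and it skips pivoting for clarity, so it assumes the diagonal entries stay nonzero):

```python
import numpy as np

def gauss_jordan(A, d):
    """Reduce the augmented matrix [A | d] to [I | x] by row operations,
    mirroring the hand procedure above."""
    M = np.hstack([np.asarray(A, float), np.asarray(d, float).reshape(-1, 1)])
    n = len(M)
    for i in range(n):
        M[i] /= M[i, i]                 # put a 1 on the diagonal
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]  # zero out column i in every other row
    return M[:, -1]

A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
d = [8, -11, -3]
print(gauss_jordan(A, d))  # [ 2.  3. -1.]
```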

I'd mention that this is just one method for solving a system of linear equations. Of course, you might well have the solved formulas for the right side of the equation programmed into your calculator's memory in this case.

I'd leave it as another exercise to diagonalize the generalized 2 x 2 case, which is a lot easier and less time consuming. This also hints at why generalized solutions, outside of computer-generated ones, become magnified in terms of computational expense.

I'll probably demonstrate other methods later for solving such equations, aside from the more well-known back-substitution method that I showed in a previous posting involving the solution of a system of equations for a given sampling polynomial.
