Okay, so it's probably the sort of topic that fewer people encounter as they approach computing through the formulation of quantities, but it sparks a bit of interest on my part, only because error in problem solving becomes more significant given the many ways in which one could mistakenly tackle a problem. The sorts of problems arising here could be many, of course. Where algebraic expressions are concerned, the number of possible errors grows with the number of terms and with the loss of symmetry in their interrelations (i.e., where we see less geometric or visual order in how the terms relate). For instance, a higher-order polynomial is easier to remember when its terms are written in an orderly way, by increasing or decreasing degree. Conversely, greater disorder in how terms are written and identified (e.g., where each term's identifying characteristics become large and seemingly unique) makes simplifications harder to analyze. More factors and terms in a given solution will likely increase the probability of error, of algebraic mishandling of one sort or another. So as we move into higher levels of education, especially in something like math, we inevitably develop methods and aids for resolving algebraic problems. A basis for these should include something like:
1. Make simplifications (e.g., cancellation of terms) as soon as possible, whenever and wherever they appear. Obviously, the fewer terms you have to keep track of, the lower the probability of mistakenly switching a sign while carrying terms from one derivation step to the next.
2. Learn to work with the GCF (greatest common factor) in algebra, especially for rational expressions: factor the polynomials and then find the needed GCF across them. The other point of the GCF is being able to invert a rational expression when, say, transposing it from one side of an equation to the other. Another old calculus-inspired technique that I had neglected for some time is partial fractions, whereby a higher-order rational expression is decomposed into a sum of lower-order rational terms. You should know how to compose rational expressions, and how to decompose them. While it seems a bit odd to single out this bit of algebra, I think it's also one of the more overlooked and maybe skipped parts of algebra in terms of derivation handling.
3. A large jump in complexity from one derivation step to the next can mean it's time to reassess your work. Say you were working with factored expressions and decided to expand, only to find yourself with a much larger set of apparently unrelated terms: was the expansion at that point really necessary? If you aren't getting any cancellation of terms in return, it may be better to regroup back to the pre-expansion step and see whether something else can be done first, saving work in the simplification process later.
4. This point isn't always sound, but if the workload you are facing is badly out of line with what solutions to problems of a similar type typically require, you at least have a tangible sense that you might have made the problem harder than it needs to be. Again, a regroup?
The list could surely be extended; here I've only dealt with algebra alone, and in a general way.
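As a small illustration of the GCF and partial-fraction points above, here's a sketch in Python using only the standard library (`math.gcd` and `fractions.Fraction`). The function names are my own, and the cover-up method shown is just one way of doing the decomposition, limited to proper rational expressions with distinct rational poles.

```python
from fractions import Fraction
from math import gcd

def coefficient_gcf(coeffs):
    # GCF of a polynomial's integer coefficients, e.g. 6x^2 + 9x + 3 -> 3
    g = 0
    for c in coeffs:
        g = gcd(g, abs(c))
    return g

def cover_up_residues(numerator, roots):
    """Heaviside "cover-up" partial fractions for p(x) / prod(x - r_i)
    with distinct poles r_i: the residue at r_i is
    p(r_i) / prod_{j != i} (r_i - r_j).
    Returns a list of (residue_i, r_i) pairs so that
    p(x)/prod(x - r_i) == sum of residue_i / (x - r_i)."""
    residues = []
    for i, r in enumerate(roots):
        denom = Fraction(1)
        for j, s in enumerate(roots):
            if j != i:
                denom *= (r - s)
        residues.append((Fraction(numerator(r)) / denom, r))
    return residues

print(coefficient_gcf([6, 9, 3]))  # 3
# (3x + 5) / ((x + 1)(x + 2)) decomposes as 2/(x + 1) + 1/(x + 2)
print(cover_up_residues(lambda x: 3 * x + 5, [Fraction(-1), Fraction(-2)]))
```

Using exact `Fraction` arithmetic here is deliberate: a decomposition is exactly the sort of step where a dropped sign slips in, and exact arithmetic lets you recompose and compare exactly.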
For science and engineering in particular, and something I've admittedly skipped past at times, knowing well the potential consequences, there is the matter of tracking units and knowing how they are expressed and converted from one system to the next. Often, when working with pen and paper rather than an electronic source (where copy and paste at least removes the human transcription error of recopying equation text to start the next derivation step), I have on occasion neglected a unit conversion; outside of algebraic mistakes, unit conversions are likely another big source of error. A lesser-known error concerns the conversion method itself. Most unit conversions are scalings, i.e., multiplying by a conversion factor. Some conversions, however, are translations, where a constant is instead added to or subtracted from a quantity to convert from one unit to the next. Some temperature conversions, for instance, are translations, and others combine translation with scaling.
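A minimal sketch of the scaling-versus-translation distinction in Python; the Celsius-to-Fahrenheit conversion combines both a scale factor and an offset, while Celsius to Kelvin is a pure translation.

```python
def km_to_miles(km):
    # pure scaling: multiply by a conversion factor
    return km * 0.621371

def celsius_to_kelvin(c):
    # pure translation: add an offset, no scaling
    return c + 273.15

def celsius_to_fahrenheit(c):
    # combined: scale by 9/5, then translate by 32
    return c * 9 / 5 + 32

print(celsius_to_kelvin(0))        # 273.15
print(celsius_to_fahrenheit(100))  # 212.0
```

The practical point: a pure-scaling habit of "multiply by a chain of factors" silently produces the wrong answer for a translated conversion, which is why temperature is the classic trap.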
To summarize the points above, I'd say organizing oneself around error checking involves, at base, something like a routine or checklist built around what should be checked first. If the simpler mistakes are the most likely ones, the routine would be structured around, say, checking unit conversions first, then checking for algebraic mistakes, and so on up the list. You can likewise try alternate solution strategies to see whether they produce the same result; if they do, it's less likely that you are mistakenly doing something wrong in the method, especially if you have also verified it against textbook examples with worked solutions. As an additional resort, I re-read the problem to make sure I have absorbed the details given at the outset; I don't know how many times some detail gets overlooked in the problem-solving process. In mathematics at least, I've found that stated assumptions tend, as a rule, not to be extraneous, though this hasn't always held for any and every such problem. Has an assumption been left out whose use would let you construct something in the problem model that aids simplification?
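The "alternate method" cross-check above can be mechanized. As a toy sketch (my own example, not from any particular text), here the roots of x^2 - 5x + 6 are found by the quadratic formula and then independently checked by substituting them back into the original equation.

```python
import math

def quadratic_roots(a, b, c):
    # method 1: the quadratic formula
    disc = b * b - 4 * a * c
    r = math.sqrt(disc)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

def residual(a, b, c, x):
    # method 2, an independent check: substitute the candidate
    # root back into a*x^2 + b*x + c; it should be ~0
    return a * x * x + b * x + c

roots = quadratic_roots(1, -5, 6)
print(roots)  # (3.0, 2.0)
print(all(abs(residual(1, -5, 6, x)) < 1e-9 for x in roots))  # True
```

Substitution is cheaper than re-deriving, which is what makes it a good first line in the checklist: it catches a wrong answer without telling you which step went wrong.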
Then beyond the equations there is the modeling process. When given some quantities and asked to find others, a reading of the text leads us to the relation that bears on what we have and what we are asked to produce. Left to scour through sets of equations, especially when it seems unlikely we could derive a given equation on our own, I am left pondering this best-fit approach to problem solving. It's likely the approach we fall back on for a final exam, or at least when we have the relations written down or versed in mind and are fitting assumptions, knowns, and unknowns alike to a given set of equations. Insight comes from the memory of repeated exercise, perhaps, or from constructing the problem model visually, even breaking it into separate components that then form the basis of a series of equations. Maybe we are led to use conservation of energy, conservation of angular momentum, conservation of mass, mass-flow analysis, Newton's laws, and so forth, all of which lead us along the path to a solution. On the other hand, embedded errors can be attributed to missing a model that would aid simplification, or missing a point of conservation, and you needn't think of conservation only in terms of the traditional big ones like energy, mass, and so forth.
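Conservation laws also double as built-in error checks. A sketch, with assumed values (a 2 kg mass dropped from 100 m, taking g = 9.8 m/s^2): total mechanical energy should stay constant along a free-fall trajectory, so any drift in the total flags an error in the kinematics.

```python
G = 9.8  # m/s^2, assumed constant gravity near the surface

def free_fall_state(h0, t):
    # kinematics for a mass dropped from rest at height h0
    v = G * t
    h = h0 - 0.5 * G * t * t
    return h, v

def total_energy(m, h, v):
    # potential + kinetic; conserved in free fall (no drag)
    return m * G * h + 0.5 * m * v * v

m, h0 = 2.0, 100.0
e0 = total_energy(m, h0, 0.0)
for t in (0.5, 1.0, 2.0):
    h, v = free_fall_state(h0, t)
    # conservation check: energy at time t must match the initial total
    assert abs(total_energy(m, h, v) - e0) < 1e-6
print(e0)  # total mechanical energy in joules, ~1960
```

The same pattern applies to any conserved bookkeeping quantity, not just the traditional big ones: mass in a flow network, charge in a circuit, even row sums in a table.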
Anyway, it's a topic of recent inspiration in its own right: however smoothly solution handling usually flows, once in a blue moon you find yourself working absurdly longer than expected on something that catches you off guard. In the ambitious engineering that builds layers of complexity into a system's overall design, it is often the simplest little embedded errors, or neglected models, that carry their own set of consequences down the road.