I am not certain how often this happens when modeling a particular problem, but given the complexity of this one in terms of conceivable outcomes, I found myself thinking a great deal about the heuristic aspect of implementing such a model. When the model solution is not completely known at the outset, experimental trials yield errors, and those errors lead to refinements of the model until one has hopefully achieved something with little to no error.
Then there is the human aspect of judging when and where error occurs; in my case this came from seeing visual errors in the data. For a machine, the equivalent would be a numerical analysis looking for jump discontinuities in a data set, indicating that model deficits were likely, which in turn would lead to a new hypothesis on the cause of the error. That is probably the bigger leap in terms of intelligently learning from errors. For the model at hand, a rasterization method, it turned out that blocks of data were not being properly assigned at certain boundary conditions where the process should have re-iterated. The problem therefore logically revolved around the conditional structures governing when to re-iterate that process, and where those conditions were incorrect. This in turn led to refining the conditions surrounding both the choice of boundary conditions and how closely one needs to approximate a given boundary, especially when it is hard to separate the ambiguous circumstances that arise when a data point lies too close to a boundary condition from the case where no boundary condition is present at all.
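As a minimal sketch of that machine-side check, one could scan the rasterized grid of cell assignments for runs of unassigned pixels, which is where such discontinuities would show up. The names here (cell_ids, UNASSIGNED, unassigned_runs) are purely illustrative and not from the actual code:

```python
UNASSIGNED = -1  # sentinel for pixels the rasterizer never assigned (assumption)

def unassigned_runs(cell_ids):
    """Yield (row, start_col, end_col) for each horizontal run of
    unassigned pixels in a 2D grid of Voronoi cell indices."""
    for row, line in enumerate(cell_ids):
        start = None
        for col, value in enumerate(line):
            if value == UNASSIGNED and start is None:
                start = col                      # a gap begins
            elif value != UNASSIGNED and start is not None:
                yield row, start, col - 1        # a gap ends
                start = None
        if start is not None:
            yield row, start, len(line) - 1      # gap runs to the row's edge
```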
A more in-depth discussion of the problem:
The approach I used for determining Voronoi pixel data went roughly as follows:
1. Use the cell site of the graph as a seed for distance-approximating all nearby points inside that cell.
2. Determine ymin/ymax boundary conditions from the cell site to a given neighboring edge, where the cell site lies between that edge's start and end positions. Ideally there would be two such edges, but this assumption does not always hold.
3. Use the same process along the x axis; similarly, two edges are not always sufficient for describing the boundary conditions.
4. When hitting the cell's local ymax/ymin boundary condition while step-iterating all points in between for Voronoi cell inclusion, compare that position against the cell's absolute ymax/ymin. If it is not within the nearest vicinity, re-iterate steps 2 and 3, choosing either the x of ymax/ymin or a point sufficiently near the boundary condition within proximity to that x. (A rough sketch of this first pass follows the list.)
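To make the steps above concrete, here is a rough sketch of how that first pass might look, under simplifying assumptions of my own rather than the original implementation: each cell carries its site and a list of edges given as ((x0, y0), (x1, y1)) segments, and fill is whatever routine assigns a pixel to the cell.

```python
def edge_y_at_x(edge, x):
    """Intersection of a (non-vertical) edge with the vertical line at x,
    or None when x falls outside that edge's start/end positions (step 2)."""
    (x0, y0), (x1, y1) = edge
    if x1 == x0 or not (min(x0, x1) <= x <= max(x0, x1)):
        return None
    t = (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

def rasterize_cell(site, edges, fill):
    """First pass: walk each x column the cell's edges span (step 3), take
    the nearest edge crossings above and below the site as the local
    ymin/ymax (step 2), and assign the pixels in between (steps 1 and 4)."""
    _, site_y = site
    xs = [x for e in edges for (x, _) in e]
    for x in range(int(min(xs)), int(max(xs)) + 1):
        crossings = [y for e in edges if (y := edge_y_at_x(e, x)) is not None]
        below = [y for y in crossings if y <= site_y]
        above = [y for y in crossings if y >= site_y]
        if not below or not above:
            continue  # boundary condition missing here; left to the 2nd pass
        for y in range(int(max(below)), int(min(above)) + 1):
            fill(x, y)
```

Where the continue branch fires, no bracketing edges exist at that column, which is exactly the kind of missing boundary condition discussed next.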
As it turns out, one big potential root error was the assumption that boundary conditions could always be described in this modeling process, which would have made the conditional structures for re-iterating the rasterization method sufficient. I could have implemented additional methods for disambiguating the data set. Instead, I chose to avoid iterating over boundary-condition points, which in this problem are the edge-related data of the Voronoi graph (iterating over them would invite the false assumption that a boundary condition exists where in fact no edge is present to describe one), and then added a second-pass method, neighbor approximation. Since a point on a Voronoi graph edge is equidistant to its nearby neighboring sites, any neighboring pixel at a +/- 1 increment adequately defines a nearest cell, so I could determine these boundary-condition points reliably, provided the first pass had not improperly assigned boundary pixels. The remaining collection of undetermined pixels on the graph were the leftovers between two assigned sets of data points, namely the skipped edge boundaries of the Voronoi cell graph. At all costs I also wanted to avoid blindly iterating through cells to find a nearest neighbor, which is what makes brute-force Voronoi graph generation so expensive (and quite slow). An alternate, more blind pass might have used a grid-addressing coordinate system to refine candidate cell sites: any non-rendered pixel could be rendered by choosing the cell site nearest to its grid square address, although this is not at all how edge rendering works in Fortune-based graph generation.
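A minimal sketch of that second pass, under the assumption (mine, for illustration) that the first pass leaves the skipped edge pixels marked with an UNASSIGNED sentinel in a 2D grid of cell indices:

```python
UNASSIGNED = -1  # sentinel for pixels the first pass skipped (assumption)

def second_pass(cell_ids):
    """Assign each leftover edge pixel from any already-assigned +/- 1
    neighbor; since an edge pixel is (near-)equidistant to the cells on
    either side, whichever assigned neighbor is found first is acceptable."""
    height, width = len(cell_ids), len(cell_ids[0])
    for y in range(height):
        for x in range(width):
            if cell_ids[y][x] != UNASSIGNED:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < height and 0 <= nx < width \
                        and cell_ids[ny][nx] != UNASSIGNED:
                    cell_ids[y][x] = cell_ids[ny][nx]
                    break
```

That indifference to which neighbor gets copied is itself an instance of the fuzzy sufficiency discussed below.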
Some added thoughts on refining the learning process. In heuristic learning we may also be inclined, if there is no solution, to 'give up' before we have discovered or reasoned well enough that no solution exists. I recall an infamous and once popular puzzle created by a man who offered a prize for solving a game that involved manipulating squares, a permutation-type puzzle. He offered a generous prize to the first person to solve it, while secretly knowing the puzzle had no solution. The prize enticed consumers to buy the puzzle, and he could rest assured that it was mathematically unsolvable, thus securing his wealth. In much the same way, we employ a kind of fuzzy logic in heuristic learning and solutions. When solutions are not completely right in the context of solving all unknowns, we may find that a fuzzy sufficiency is met. Usually this fuzziness amounts to how well a solution performs over an iterated series and how far within acceptable error tolerances it falls, or, when no solution is found at all (as in the case of the permutation puzzle), how much time is spent pursuing a solution that is likely never to be found.
I should mention that deficits were found in the original process. Whether these arose from not having enough control conditions for every circumstance that could occur during graph generation, or from something in the original code implementing the graph generation itself, they led to the creation of an added method which should hopefully take care of everything else needed to solve the problem. In this way, too, a heuristically fuzzy solution appears (the second-pass method), weighed against the time spent laboring over potential logical deficits in the first-pass method without seeing any immediate cause in the methods themselves. While the first pass might have guaranteed that a large proportion of the data was statistically accounted for, it was not 100% complete. This sort of solution work, I imagine, is very real-world oriented, especially with respect to large-scale complex designs, or problems that need redundant reapplication of work when the existing methods are not yet completely sufficient. Consider, for example, industrial quality-control processes that aim to add extra layers of checking in a plant-processing context.