Tuesday, February 24, 2015

Terrain Heightmap resampling

The code here didn't hold up when tested. I have a more recent post with tested, corrected code that should work; you can find it on my blog a few posts ahead. Out of curiosity I looked up the bicubic resampling method and found the following ordering used for the supplied row coefficients, where 16 coefficients are used in the solution process. According to the algorithm you need these sets of coefficients, or at least 16 sample points around the original 4-point grid cell. For example, the cell (0,0), (1,0), (0,1), (1,1) would be expanded to include the points (-1,-1), (-1,0), (-1,1), (-1,2), (0,-1), (0,0), (0,1), (0,2), (1,-1), (1,0), (1,1), (1,2), (2,-1), (2,0), (2,1), (2,2). These feed the computation, so you want the following on hand:

\( H(0,0), H(1,0), H(0,1), H(1,1)\)
\( \partial H(0,0) / \partial x ,  \partial H(1,0) / \partial x,  \partial H(0,1) / \partial x,  \partial H(1,1) / \partial x \)
\( \partial H(0,0) / \partial y ,  \partial H(1,0) / \partial y,  \partial H(0,1) / \partial y,  \partial H(1,1) / \partial y \)
\( \partial^2 H(0,0) / \partial x \partial y ,  \partial^2 H(1,0) / \partial x \partial y,  \partial^2 H(0,1) / \partial x \partial y,  \partial^2 H(1,1) / \partial x \partial y \)

Of course, forward difference methods would work fine for the single partials.  I searched on the mixed partials and found a central difference formula for those.  At the moment I haven't thought of a method that would cover an existing data set without loss at the edges (though I imagine some standard image-processing approach would do it).  Generally, if you don't mind a little extra computation for the height map, you can compute, say, width+2 by height+2 data points and then take differences over positions 1 to size, leaving every computed ("seen") point with a central difference value and no conditional handling of boundary cases, as sketched below.
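A minimal sketch of that padding idea, using the Coordpair/CPointsMap typedefs from the code further down; sampleHeight here is a hypothetical stand-in for however you generate or read height values (fBm, image lookup, etc.):

void BuildPaddedHeightmap(int size, CPointsMap & heightmap){
 //fill a padded (size+2) x (size+2) map (indices 0 .. size+1) so that every interior
 //point 1 .. size has valid central differences with no boundary special-casing
 for (int x = 0; x <= size+1; x++)
  for (int y = 0; y <= size+1; y++)
   heightmap[Coordpair(x, y)] = sampleHeight(x, y);  //hypothetical height source
}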

Otherwise, I found that Ogre's Image class (like any basic image library, I'd imagine) provides image resampling with bicubic and bilinear filtering options.  You could run that directly on an image of a given height map.
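A minimal sketch of that route, assuming Ogre 1.x's Image::resize and its FILTER_BICUBIC option (check your Ogre version's Image API; the file names are just placeholders):

#include <OgreImage.h>
#include <OgreResourceGroupManager.h>

void ResampleHeightmapImage(){
 Ogre::Image img;
 img.load("heightmap513.png", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);
 img.resize(1026, 1026, Ogre::Image::FILTER_BICUBIC);  //bicubic resample to the new size
 img.save("heightmap1026.png");
}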

Here's my untested code example for the partials generated through the difference methods.

#include <cmath>
#include <map>
#include <utility>

typedef std::pair<int, int>                                  Coordpair;
typedef std::map<Coordpair, double>                          CPointsMap;

double partialx(int x, int y, const CPointsMap & heightmap){
 //central difference (more accurate relative to forward and backwards differencing)
 return (heightmap.at(Coordpair(x+1,y)) - heightmap.at(Coordpair(x-1,y)))/2.0;
}

double partialy(int x, int y, const CPointsMap & heightmap){
 //central difference (more accurate relative to forward and backwards differencing)
 return (heightmap.at(Coordpair(x,y+1)) - heightmap.at(Coordpair(x,y-1)))/2.0;
}

double partialxy(int x, int y, const CPointsMap & heightmap){
 //central difference for the mixed partial:
 //(H(x+1,y+1) - H(x+1,y-1) - H(x-1,y+1) + H(x-1,y-1))/4
 return (heightmap.at(Coordpair(x+1,y+1)) - heightmap.at(Coordpair(x+1,y-1))
       - heightmap.at(Coordpair(x-1,y+1)) + heightmap.at(Coordpair(x-1,y-1)))/4.0;
}

double bicubicInterpolate2 (const double arr[16], double x, double y);  //defined below

CPointsMap BuildBicubicResample( double size, double RSize, const CPointsMap & heightmap){
 CPointsMap Rheightmap;
   //to resample a 513x513 grid the first minimum is floored to (1,1) and the final maximum
   //hits the ceiling at (513,513); the central differences then also need positions 0 and 514,
   //so the source map must actually hold 515x515 positions.
 for (int i = 1; i < RSize+1; i++){
  for(int j = 1; j < RSize+1; j++){
   double x = ((double) i)*(size-1.0)/RSize +1.0;  //first row/column always floored to 1, allowing central differences
   double y = ((double) j)*(size-1.0)/RSize +1.0;  //max row/column always at ceiling = size
   int p0x = (int)x;  int p0y = (int)y;   //cell minimum corner
   int p1x = p0x+1;   int p1y = p0y+1;    //cell maximum corner
   //coefficient ordering:
   // arr[0..3]   = H at (p0x,p0y), (p1x,p0y), (p0x,p1y), (p1x,p1y)
   // arr[4..7]   = partial x at those same four corners
   // arr[8..11]  = partial y at those same four corners
   // arr[12..15] = mixed partial xy at those same four corners
   double arr[16];
   arr[0] = heightmap.at(Coordpair(p0x,p0y)); arr[1] = heightmap.at(Coordpair(p1x,p0y));
   arr[2] = heightmap.at(Coordpair(p0x,p1y)); arr[3] = heightmap.at(Coordpair(p1x,p1y));
   arr[4] = partialx(p0x, p0y, heightmap); arr[5] = partialx(p1x, p0y, heightmap);
   arr[6] = partialx(p0x, p1y, heightmap); arr[7] = partialx(p1x, p1y, heightmap);
   arr[8] = partialy(p0x, p0y, heightmap); arr[9] = partialy(p1x, p0y, heightmap);
   arr[10] = partialy(p0x, p1y, heightmap); arr[11] = partialy(p1x, p1y, heightmap);
   arr[12] = partialxy(p0x, p0y, heightmap); arr[13] = partialxy(p1x, p0y, heightmap);
   arr[14] = partialxy(p0x, p1y, heightmap); arr[15] = partialxy(p1x, p1y, heightmap);
   double height = bicubicInterpolate2 (arr, x - p0x, y - p0y);  //fractional offsets within the unit cell
   Rheightmap[Coordpair(i,j)] = height;
  }
 }
 return Rheightmap;
}

I have to correct myself: this coefficient ordering works when applied against the 16x16 A matrix below.

Solve for the alpha coefficients and then apply them in the summation formula.
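The summation formula here is the standard bicubic surface over the unit cell,

\( p(x,y) = \sum_{i=0}^{3} \sum_{j=0}^{3} a_{ij} \, x^i y^j \)

with the sixteen \( a_{ij} \) being the alpha coefficients solved for below.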


Using, for instance, the following...
double A[16][16] = {  
  { 1,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0},
  { 0,  0,  0,  0,  1,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0},
  {-3,  3,  0,  0, -2, -1,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0},
  { 2, -2,  0,  0,  1,  1,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0},
  { 0,  0,  0,  0,  0,  0,  0,  0,  1,  0,  0,  0,  0,  0,  0,  0},
  { 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  1,  0,  0,  0},
  { 0,  0,  0,  0,  0,  0,  0,  0, -3,  3,  0,  0, -2, -1,  0,  0},
  { 0,  0,  0,  0,  0,  0,  0,  0,  2, -2,  0,  0,  1,  1,  0,  0},
  {-3,  0,  3,  0,  0,  0,  0,  0, -2,  0, -1,  0,  0,  0,  0,  0},
  { 0,  0,  0,  0, -3,  0,  3,  0,  0,  0,  0,  0, -2,  0, -1,  0},
  { 9, -9, -9,  9,  6,  3, -6, -3,  6, -6,  3, -3,  4,  2,  2,  1},
  {-6,  6,  6, -6, -3, -3,  3,  3, -4,  4, -2,  2, -2, -2, -1, -1},
  { 2,  0, -2,  0,  0,  0,  0,  0,  1,  0,  1,  0,  0,  0,  0,  0},
  { 0,  0,  0,  0,  2,  0, -2,  0,  0,  0,  0,  0,  1,  0,  1,  0},
  {-6,  6,  6, -6, -4, -2,  4,  2, -3,  3, -3,  3, -2, -1, -2, -1},
  { 4, -4, -4,  4,  2,  2, -2, -2,  2, -2,  2, -2,  1,  1,  1,  1}};  //technically A^-1
double bicubicInterpolate2 (const double arr[16], double x, double y){
   //solve for the alpha coefficients: a = A * arr
   double a[16];
   for (int i = 0; i < 16; i++){
 double sum = 0.0;
 for (int j = 0; j < 16; j++){
  sum += A[i][j]*arr[j];
 }
 a[i] = sum;
   }
   //evaluate p(x,y) = sum a(i,j) x^i y^j, with a(i,j) stored at a[i + 4*j]
   double ret = 0.0;
   for (int i = 0; i < 4; i++){
 for (int j = 0; j < 4; j++){
  ret += a[i + 4*j]*pow(x,i)*pow(y,j);
 }
   }
   return ret;
}
The method above carries extra steps in its matrix multiplications and additions... this next method, on the other hand...
double bicubicInterpolate (double p[4][4], double x, double y);  //defined below

CPointsMap BuildBicubicResample2( double size, double RSize, const CPointsMap & heightmap){
 CPointsMap Rheightmap;
   //same layout as above: resampling a 513x513 grid this way needs the source map
   //padded out to 515x515 so that the p0 and p3 rows/columns always exist.
 for (int i = 1; i < RSize+1; i++){
  for(int j = 1; j < RSize+1; j++){
   double x = ((double) i)*(size-1.0)/RSize +1.0;  //first row/column always floored to 1
   double y = ((double) j)*(size-1.0)/RSize +1.0;  //max row/column always at ceiling = size
   int p1x = (int)x;  int p1y = (int)y;   //cell minimum corner
   int p2x = p1x + 1; int p2y = p1y + 1;  //cell maximum corner
   int p0x = p1x - 1; int p0y = p1y - 1;  //one row/column before the cell
   int p3x = p2x + 1; int p3y = p2y + 1;  //one row/column after the cell

   //4x4 neighborhood of heights, rows indexed by x and columns by y
   double arr[4][4];
   int xs[4] = {p0x, p1x, p2x, p3x};
   int ys[4] = {p0y, p1y, p2y, p3y};
   for (int r = 0; r < 4; r++)
    for (int c = 0; c < 4; c++)
     arr[r][c] = heightmap.at(Coordpair(xs[r], ys[c]));

   double height = bicubicInterpolate (arr, x - p1x, y - p1y);  //fractional offsets within the unit cell
   Rheightmap[Coordpair(i,j)] = height;
  }
 }
 return Rheightmap;
}


This second method integrates the computation of the necessary derivatives and also eliminates the zero terms that would otherwise add steps in the solution process, applying equations of this form...
double cubicInterpolate (double p[4], double x) {
 return p[1] + 0.5 * x*(p[2] - p[0] + x*(2.0*p[0] - 5.0*p[1] + 4.0*p[2] - p[3] + x*(3.0*(p[1] - p[2]) + p[3] - p[0])));
}

double bicubicInterpolate (double p[4][4], double x, double y) {
 double arr[4];
 arr[0] = cubicInterpolate(p[0], y);
 arr[1] = cubicInterpolate(p[1], y);
 arr[2] = cubicInterpolate(p[2], y);
 arr[3] = cubicInterpolate(p[3], y);
 return cubicInterpolate(arr, x);
}

CEGUI::OgreTexture: fixing an incomplete type error in Linux for Basic Tutorial 7

  I've updated my notes at http://lotsofexpression.blogspot.com/2014/11/getting-started-with-ogre-experience.html with this, but I wanted to reiterate the information here since it could be of aid.

If you build Ogre and CEGUI using cmake, for some odd reason the CEGUI build doesn't automatically pull in the CEGUI RendererModules Ogre/Texture.h header when you compile your own Ogre project.  Normally this wouldn't be much of a problem if you never reference the CEGUI::OgreTexture class, but if you do (for instance in a static_cast), you can hit an incomplete type / forward declaration error: the compiler sees a reference to OgreTexture but doesn't have the complete class definition to work with.  Thus adding
 #include <CEGUI/RendererModules/Ogre/Texture.h> 

in your project's header file seems to resolve the issue.

Mostly you'd do this if you want to render Ogre textures into CEGUI, where, for instance, a GUI widget could in turn display those textures.
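A minimal sketch of that use case, assuming the CEGUI 0.8-style OgreRenderer::createTexture overload that wraps an existing Ogre::TexturePtr (the texture name and helper function here are just placeholders):

#include <OgreTexture.h>
#include <CEGUI/CEGUI.h>
#include <CEGUI/RendererModules/Ogre/Renderer.h>
#include <CEGUI/RendererModules/Ogre/Texture.h>  //without this include the cast below fails as an incomplete type

//'renderer' would come from CEGUI::OgreRenderer::bootstrapSystem(); 'ogreTex' is an Ogre
//texture you render to and want a GUI widget to display.
void wrapForCEGUI(CEGUI::OgreRenderer & renderer, Ogre::TexturePtr ogreTex)
{
 CEGUI::Texture & ceguiTex = renderer.createTexture("MyRTTTexture", ogreTex);
 //the static_cast only compiles once CEGUI::OgreTexture is a complete type:
 CEGUI::OgreTexture & rttTex = static_cast<CEGUI::OgreTexture &>(ceguiTex);
 (void)rttTex;  //hand off to a CEGUI Image / window as needed
}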

Sunday, February 22, 2015

Terrain heightmap Bump (Normal) mapping in Ogre


I've looked through a series of web sites on this topic, ranging across various degrees of technical depth, but I've at times had a difficult time finding actual application code (outside of open source GLSL examples of various bump mapping techniques).

I did find one particular code example that used a forward difference method for approximating the first partial derivatives.  The code was slightly off, but I think I managed to correct it and have appended my revision.  Secondly, normal mapping has some slight technical differences from bump mapping: producing a normal map does not require a shading/illumination computation, whereas a bump map appears to integrate a given shade computation.

The normal map, at least in the Ogre view of normal/heightmap combinations for a given terrain, is simply an array of surface normals over the terrain's surface, so producing a normal map is actually a little less expensive computationally speaking... Ogre takes the surface normals and then renders the bump shading from them, if I am not mistaken.  The format that I've found works comes straight from computational image processing articles (Nvidia being one source site), and is simply the surface gradient vector of the form

\( (-\partial H(x,y) / \partial x , \, -\partial H(x,y) / \partial y , \, 1.0) \)

where \( H(x,y) \) is the height map function.  Using the forward difference method, \( \partial H(x,y) / \partial x \approx H(x+1, y)-H(x,y) \) and \( \partial H(x,y)/\partial y \approx H(x,y+1) - H(x,y) \).

There is a scaling factor a that can be added into the gradient; it effectively increases or decreases the strength of the normal map (outside of the predominant blue axis) and modifies the gradient as

(-a* (H(x+1, y)-H(x,y)) , -a * (H(x,y+1) - H(x,y)), 1.0)

Thus increasing a > 1 produces greater contributions to the shadow prominence in the rendered bump map shading, and a can be used as the normal intensity factor for normal mapping.

I've also rescaled my height map values in the difference equations by simply dividing the difference values by the total heightmap range

diff = terrainMaxHeight() - terrainMinHeight()

on the terrain.

Thus the Gradient (surface normal) vector might look like:

(-a/diff * (H(x+1, y)-H(x,y)) , -a/diff * (H(x,y+1) - H(x,y)), 1.0)

In this way, the values amplified by the scaling factor a aren't already scaled by some pre-existing value > 1.  You'd then want to normalize the gradient vector (divide it by its length) and rescale it to RGB values as Gradvec/2 + .5.  To tweak the sensitivity in my normal mapping routine, I've actually decreased Grad.z from 1 to some lesser value > 0.  This works because normalization will diminish Grad.z whenever Grad.x and/or Grad.y exceed 1, so with increasing a > 1 the normalized Grad.z tends toward zero; but after the RGB rescale Grad.z always ends up at a minimum of .5, hence the blue channel is never completely absent as long as Grad.z >= 0 before normalization.
In my case I've chosen Grad.z around .1 and a = 1 to around a = 1.5, although you could increase this factor where the normals become extremely strong.

Thus the final surface normal vector  prepared for RGB output should look like:

Norm((-a/diff * (H(x+1, y)-H(x,y)) , -a/diff * (H(x,y+1) - H(x,y)), 1.0)) / 2 + .5

where diff = terrainMaxHeight() - terrainMinHeight(), a is the normal intensity factor (user supplied), and H(x,y) is the heightmap value at the given x,y coordinate position.
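A minimal sketch of that formula in code; the Vec3 struct and the height arguments are just illustrative, with the H values coming from whatever heightmap lookup you use:

#include <cmath>

struct Vec3 { double x, y, z; };

//hxy = H(x,y), hx1y = H(x+1,y), hxy1 = H(x,y+1); 'a' is the user-supplied intensity
//factor, 'diff' the terrain height range, 'gradZ' the pre-normalization z component
//(1.0 by default; the post lowers this to around .1 for extra sensitivity)
Vec3 surfaceNormalRGB(double hxy, double hx1y, double hxy1,
                      double a, double diff, double gradZ = 1.0)
{
 Vec3 g;
 g.x = -a / diff * (hx1y - hxy);   //forward difference in x, scaled
 g.y = -a / diff * (hxy1 - hxy);   //forward difference in y, scaled
 g.z = gradZ;

 //normalize, then remap from [-1,1] to [0,1] for RGB output
 double len = std::sqrt(g.x*g.x + g.y*g.y + g.z*g.z);
 Vec3 rgb = { g.x/len/2.0 + 0.5, g.y/len/2.0 + 0.5, g.z/len/2.0 + 0.5 };
 return rgb;
}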


Ogre's terrain manager is also nice in another respect: the interpolators are stored alongside the heightmap data, so one can choose points on a given map at any scale without having to use the more accurate but expensive procedural method itself when picking a scale value for higher resolution maps.  Thus from a 513x513 pixel heightmap I can actually render 1026x1026 and higher maps by using getHeightAtTerrainPosition() and supplying positions in the range [0,1] in both the x and y directions.  To produce a 1026x1026 map over a doubly iterated set of x and y positions, I just take each iterator value and divide by 1026 (see the sketch below).  This is also better than recalling a value directly from an fBm method, since that would return a heightmap position that is not actually rendered if, for instance, a lower resolution heightmap instance of the terrain is being used.
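A minimal sketch of that loop, assuming an Ogre::Terrain pointer named terrain (Ogre's getHeightAtTerrainPosition takes normalized terrain-space coordinates in [0,1]):

#include <OgreTerrain.h>
#include <vector>

std::vector<float> upsampleHeights(Ogre::Terrain * terrain, int outSize /*e.g. 1026*/)
{
 std::vector<float> heights(outSize * outSize);
 for (int j = 0; j < outSize; j++)
  for (int i = 0; i < outSize; i++)
   heights[j * outSize + i] = terrain->getHeightAtTerrainPosition(
       (Ogre::Real) i / outSize, (Ogre::Real) j / outSize);  //divide by the output size as described above
 return heights;
}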

Here is an example surface normal map that I constructed alongside the original texture.  The normal map is technically set up to Ogre's specifications, being an RGB normal map coupled with an alpha heightmap channel (although you can't see the transparency here):




For procedural textures (data that are technically not stored as terrain coordinates), I haven't researched higher resolution mapping methods, although arguably this probably isn't needed if one is using much smaller scaling in a tiling series.  If you were interested in this, you'd likely want to build interpolating functions for your height map or procedural f(x,y) texturing output, store the coefficients in a map for fast lookups, and apply them in your favorite interpolating equation method.  Given the computational expense, this would let you avoid the lengthier computations of the higher resolution rescaling method on such texture topology.

By the way, my terrain texture above is 3078x3078, produced from originally 513x513 heightmap data, likely via some higher order interpolating method (quadratic or higher)...
see also bicubic resampling.  You can find a C++ implementation of bicubic resampling at
http://www.paulinternet.nl/?page=bicubic.  To set up bicubic interpolation, you'd first fill in the p array (in the link's algorithm) so that any two grid points (on the original two-dimensional point set for a given height map) have a 4-tuple coefficient set of the form
(p0, p0', p1, p1'), where p0 = H(x0,y0) and p1 = H(x1,y1), for instance, and p0' and p1' are the first derivatives at those points along the given axis (directional derivatives)... the forward difference method above already computes these directional derivatives, which can in turn be supplied as the p0' and p1' coefficients (you just need to track and store them in your normal mapping routine).  The link provides further math if you want details on how this works.

If you are wondering about the formulation of the surface normal, it is derived from the cross product of the tangent vectors of the height map surface in the x and y directions:

\( x \) direction tangent vector \(= (1,0,\partial H / \partial x) \)
\( y \) direction tangent vector \(= (0,1,\partial H / \partial y) \)

\( (1,0,\partial H / \partial x) \times (0,1,\partial H / \partial y) = (-\partial H / \partial x, \, -\partial H / \partial y, \, 1) \)

which gives the negated x and y partials and a positive z component.

Sunday, February 15, 2015

Heuristic processes in modeling

I am not certain how often this comes up when modeling a particular problem, especially given the complexity of such a problem in terms of conceivable outcomes, but I found myself considering the heuristic aspect of implementing such a model.  Experimental trials yield errors which lead to refinements of a given model solution until one has hopefully achieved something with little to no error, when the model solution is not completely known at the outset.

Then there is the human aspect of judging when and where error occurs; in my case this came from the presentation of visual errors in the data.  For a machine this would amount to looking, in numerical analysis terms, for jump discontinuities in a given data set, indicating that model deficits were likely, which in turn would lead to a new hypothesis on the cause of the error.  That is probably the bigger leap in terms of intelligently learning from errors.  In the model at hand, a sort of rasterization method, it turned out that blocks of data were not being properly assigned at certain boundary conditions where re-iterations of a given process should occur.  Thus the problem logically revolved around the conditional structures controlling re-iteration of that process and where they were incorrect.  This in turn led to a refinement of the conditional structures surrounding both the choice of boundary conditions and how closely one needs to approximate a given boundary condition, in the absence of being able to separate the ambiguous circumstances that arise when data points are too close to a given boundary condition relative to boundary conditions that are not present.

A more in depth discussion of the problem:

   The approach I had for determining Voronoi pixel data goes as follows:
1.  Use the cell site of the graph as a seed for distance-approximating all nearby points inside that cell.
2.  Determine the ymin/ymax boundary conditions from the cell site to a given neighbor edge where the cell site lies between that edge's start and end positions.  There would ideally be two such edges, but this assumption does not always hold.
3.  Use the same process on the x axis; similarly, two edges are not always sufficient for describing the boundary conditions.
4.  When hitting the boundary condition of the cell's local ymax/ymin while step-iterating all points in between for Voronoi cell set inclusion, compare that neighboring position against the cell's absolute ymax/ymin position.  If it is not within a nearest vicinity, re-iterate the processes in 2 and 3, choosing either the x of ymax/ymin or a point sufficiently near the boundary condition in proximity to the x of ymax/ymin.

As it turns out, one big potential root error was the assumption that boundary conditions could always be described in this modeling process, which was supposed to be sufficient for generating the conditional structures used in re-iterating the rasterization method.  I could have implemented additional methods for disambiguating the data set.  Instead I chose to avoid iterating over boundary condition points, that is, the edge-related data of the Voronoi graph (since that would lead to falsely assuming a boundary condition was given when no boundary condition or edge existed to describe it), and then chose a second-pass method (neighbor approximation): since a point on a Voronoi graph edge is equidistant to all nearby neighboring nodes, any neighboring point (a +/- 1 increment) adequately defines a nearest cell, and thus I could determine these boundary condition points given that the first-pass method hadn't improperly defined boundary points.  The remaining collection of undetermined pixels on the graph were leftovers between two sets of data points, namely the skipped edge boundaries of the Voronoi cell graph.  Generally I wanted to avoid at all costs the step of blindly iterating through cells to find any nearest neighbor, which is what the brute force method of Voronoi graph generation does (and is quite slow).  An alternate pass in a more blind approach might have used a grid addressing coordinate system to refine the candidate cell sites: any non-rendered pixel could be rendered by choosing the cell site nearest to its grid square address, although this completely skips the edge rendering used in Fortune-based graph generation.

Some added thoughts on refining the learning process.  As it turns out, in heuristic learning we may also be inclined, if there is no solution, to 'give up' before we have discovered or reasoned well enough that no solution exists.  I recall an infamous and once popular puzzle created by a man who offered a prize for a solution to a game involving manipulating squares in a permutation-type puzzle.  He offered a generous prize to the first person to solve the puzzle, while he secretly knew the puzzle had no solution.  The prize in turn enticed consumers to buy the puzzle, and he could rest assured that the puzzle mathematically had no solution, thus securing his wealth.  In this way we employ a kind of fuzzy logic in heuristic learning and solutions.  When solutions are not completely right in the context of solving all unknowns, we may find that a fuzzy sufficiency is met... usually this fuzziness amounts to how well a solution performs over an iterated series and how much of it lies within acceptable error tolerances, or, when no solution is found at all (as in the case of the permutation puzzle), how much time is spent pursuing a solution that is likely not to be found?!

I'd mention that deficits were found in the original process.  Whether these were owing to not enough control conditions for all the circumstances that could occur in graph generation, or to something that arose in the original graph generation code, they led to the creation of an added method which should hopefully conclude all the other aspects of the tasks needed to solve the given problem.  In this way a heuristic, fuzzy solution appears (the 2nd pass method), weighed against the time spent laboring over potential logical deficits in the first pass method without seeing an immediate reason in the methods of that pass.  While the first pass method might have guaranteed that a large proportion of the data were statistically accounted for in general, it was not 100% complete.  This sort of solution work, I imagine, could be very real world oriented, especially with respect to large scale complex designs, or solving problems that need, for instance, redundant reapplication of work when methods are not yet completely sufficient.  Consider also industrial quality control processes which aim to add extra layers in a given plant processing context, for example.

Sunday, February 8, 2015

On the other hand...

The Color Out of Space

On the other hand...

Your sun

by the way would eat your nukes for breakfast.

Sorry this sheer juvenile post makes me laugh.


There's a lot to be measured in written word supposedly...

Rolling eyes at this one.

Sorry Chris, I am more than an annoyance apparently...

Why the worst Christmas Tree?!  Still usable like any good fire hydrant.  :D

Dog park, eh... :D

If I was a bigger tea drinker, I'd mail order some Murchies in remembrance of things past.




Burnt by the Sun

I could recall only so many years being mocked here...

there was a moment, I could have sworn, when the sun felt like it must have dropped from the sky and burnt my cheeks.  All before I was made to cry the 'right' tears.  Hmmm...

I tell the truth by the way.  :D


I am waiting for the next story called Adaptive robots on Mars invade Earth

This is where the little bots go crazy and find the holy grail for prodigious self replication.  When space runs out on Mars they go for Earth!  Original story?!



Next filler piece...

Who do you think you are to write and self publish anything?!

Or more so it seems that when given the opportunity of a voice, one is given to saying little on anything, and, like anything else, conforming well to a given place in life.

I see some writing about taking risk, but really publishers seem to publish those who talk about sounding risky rather than, ironically, obliging the concept of risk itself.  That sort of risk means talking openly about things that matter financially to the publishing world, whether it is given to the degree of less-than-substantial fluff, or really just another voice added to the noise of ideology.   I remember a day so many years ago when a voice was made to change overnight at the behest of a power, and likely on American soil.

Cultivating a given reputation with the right sort of crowd, and saying the right sorts of appealing things that people love to hear, is a commonplace way of life, LinkedIn for the lifeline.
In another way it seems so much more like military culture these days as opposed to a civilian democracy.  At least in military culture the respected person is given rank and merit, and hence something of the right of authorship, or basically the authority in saying, along with all the right to burden underlings in the process; especially when hearing redundant and banal media voices, often paid in such a lobby, even accuse another of banality.  Likely this is the creativity of militaristic culture for you.


Saturday, February 7, 2015

This next blog is filler and to gain stats

Although the intent is to disprove the necessity of being someone with a name and reputation when it comes to self publication.

Generally there isn't much of a name, one might suspect, known for writing, infamy (at least somewhere the salacious gossip quieted down), ummm, programming while lacking all manner of organization... my behemoth is growing however.  I have repeated for the nth time another LERP inlined in another file.  Yet I am still happy with myself, not for writing anything of potential sale value, or even having the airs of finer publication... O'Reilly media ignores me.  I gained too much weight to be Batman now.  I am middle aged.

Crap, I am close to being a dead trouser, I am afraid!  By the way, that's a German idiom, at least insofar as my meaning intends.

Oh well I am busy I guess that's what matters.

I am watching all these Japanese films on getting married...who does that here?!  Let alone in Japan any more?!



