Our method models these features using a vector representation that is efficiently stored in two textures. The first texture specifies the position of the features, while the second contains their paths, profiles, and material information. A fragment shader is then proposed to evaluate this data on the GPU, performing an accurate and fast rendering of the details, including visibility computations and antialiasing. Some of our main contributions include a CSG approach to efficiently deal with intersections and similar cases, and an efficient antialiasing method for the GPU. This technique allows the application of path-based features such as grooves and similar details just like traditional textures, and can thus be used on general surfaces. Key words: surface detail, real-time rendering, vector graphics
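As a rough illustration of the two-texture layout described above, the following sketch stores a grid of (offset, count) entries pointing into a flat list of feature records; every name here (grid, data, features_at) is hypothetical and not part of the paper:

```python
# Sketch of the two-texture layout (all names hypothetical).
# The grid texture maps each texel to an offset/count into the data texture;
# the data texture stores per-feature path, profile and material entries.

GRID_RES = 4  # toy resolution

# grid[texel] = (offset, count) into the flat feature list; absent = empty
grid = {(1, 2): (0, 1), (2, 2): (1, 2)}

# Each data entry: (path_segment, profile_id, material_id) -- placeholders.
data = [
    (((0.3, 0.5), (0.6, 0.5)), 0, 0),
    (((0.5, 0.2), (0.5, 0.8)), 1, 1),
    (((0.2, 0.2), (0.8, 0.8)), 0, 2),
]

def features_at(u, v):
    """Return the feature entries stored for the texel containing (u, v)."""
    texel = (int(u * GRID_RES), int(v * GRID_RES))
    offset, count = grid.get(texel, (0, 0))
    return data[offset:offset + count]

print(len(features_at(0.55, 0.55)))  # texel (2, 2) holds 2 features
```

A fragment shader would perform the same two dependent fetches per visited texel.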
The local curvature can be taken into account simply by modifying the direction of the viewing ray at each visited texel according to the shape of the object. For instance, this can be done by tracking an approximate curvature along the ray, as described in , or by also taking into account the stretching introduced by the parameterization, as done in .
Evaluating Simple Cases

For each texel where the contained geometry must be evaluated for intersection, the process starts by retrieving the corresponding data entry from the grid texture at the current cursor and checking whether that texel contains features, i.e., whether any feature data is stored for it. If no features are found, the ray is simply tested for intersection with the base surface; otherwise, the features are retrieved from the data texture and evaluated. In either case, if the intersection does not lie in the current texel, the search continues as explained in the previous section.
See the pseudocode in Algorithm 2, where mat stands for material and N for the surface normal. In the next section we extend this explanation to the case where several feature elements are present in the same texel. In this simpler case, the local geometry at the current texel can be approximated using a 2D cross-section. Once the feature element is retrieved along with its cross-section, the computations for the isolated feature begin.
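The per-texel decision described above (Algorithm 2 itself is not reproduced in this text) might be sketched as follows; the helper callables and the "base" material tag are assumptions for illustration:

```python
# Hedged sketch of the per-texel test; all helper names are hypothetical.

def evaluate_texel(features, base_hit, intersect_feature, inside_texel):
    """Return (hit, material) for this texel, or None to continue the search.

    features          -- entries decoded from the data texture (may be empty)
    base_hit          -- ray/base-surface intersection point for this texel
    intersect_feature -- callable(feature, base_hit) -> hit point or None
    inside_texel      -- callable(point) -> True if the point lies in the texel
    """
    if not features:
        # Empty texel: the ray can only hit the base surface itself.
        hit, mat = base_hit, "base"
    else:
        for f in features:
            hit = intersect_feature(f, base_hit)
            if hit is not None:
                mat = f["material"]
                break
        else:
            # No feature was hit: fall back to the base surface.
            hit, mat = base_hit, "base"
    if hit is not None and inside_texel(hit):
        return hit, mat
    return None  # intersection lies outside this texel: keep marching
```

Returning None models "continue the search as explained in the previous section".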
To intersect a single ray with the 2D profile of a feature element, we should project the ray onto the local cross-section plane and intersect it with each profile segment. This, however, can be simplified by using a point-sampling adaptation of the algorithm explained in . Using this approach, we simply project each profile facet onto the base surface according to the ray direction, and evaluate each segment against the current surface point, which can be done with simple 1D operations. The surface point is here represented by the distance between the intersection of the ray with the base surface and the current path direction, as depicted in Figure 4. Notice that by sequentially evaluating the facets according to the ray direction, we automatically deal with any possible occlusion between the facets.
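The 1D projection and front-to-back facet evaluation could look roughly like this, assuming a profile given as 2D facets over a flat base surface at height 0 (function names are hypothetical):

```python
def project(pt, ray_dir):
    """Project a profile point (x, h) onto the base surface (h = 0)
    along the viewing ray direction (dx, dh), with dh != 0."""
    x, h = pt
    dx, dh = ray_dir
    return x - h * dx / dh

def visible_facet(facets, surface_x, ray_dir):
    """Return the index of the first facet whose projection covers the
    ray/base-surface intersection point. Walking the facets front to back
    (in ray order) makes nearer facets automatically occlude farther ones."""
    for i, (a, b) in enumerate(facets):
        lo, hi = project(a, ray_dir), project(b, ray_dir)
        if min(lo, hi) <= surface_x <= max(lo, hi):
            return i
    return None

# A V-shaped groove seen by a ray arriving from the upper left.
groove = [((0.0, 0.0), (0.5, -0.4)), ((0.5, -0.4), (1.0, 0.0))]
print(visible_facet(groove, 0.05, (1.0, -1.0)))  # 0: the near wall is hit
```

Everything reduces to comparisons of projected 1D coordinates, as the text states.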
For our approximate antialiasing strategy, this information about projected facets is then used as explained in Section 5. For a curved feature, visibility is computed in the same way by locally approximating the feature as a straight one. For this, we simply need to determine its current tangent direction by computing the closest point on the curve with the algorithm described in . When ray tracing, we compute the ray's spans through the feature body and subtract them from the ray path, resulting in the final intersection point. In the case where more than one feature is present in the same texel, computations must be performed to determine their actual intersection, if any. In this section we explain the case of multiple parallel or intersecting features, leaving other cases, such as intersected ends, isolated ends, or corners, for the next section.
In order to perform these computations we use a Constructive Solid Geometry (CSG) analogy, which takes as input the intersections between the ray and each individual feature profile, and combines them to find the final intersection result. This method is much more elegant, simpler, and faster to compute than the one presented in ; see Section 5. Looking at Figure 5, we can see an example with two simple intersecting features.
If the features have no protruding parts, we can think of them as having been built by removing material from the base surface. This is like building the features with Constructive Solid Geometry (CSG), in the sense that we consider a CSG tree: from a flat, solid surface, we subtract the volume of each of the features in turn, resulting in holes that can be ray-traced. The first step is to compute the intersections of the ray with each profile independently, using the algorithm described in the previous section. In this case, however, we need to compute all the intersection points, not only the first one. As the profiles are known in advance, the maximum number of intersections can be pre-computed and used to define the size of the vectors used in the fragment shader to evaluate the features.
We start by evaluating every profile in an iterative process, combining the intersections of the ray with the current profile with the result so far, according to the CSG operation.
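A 1D toy version of this CSG evaluation, subtracting each feature's ray spans from the span through the base solid along the ray parameter t, might read (names and the interval representation are assumptions):

```python
def subtract_spans(base_span, holes):
    """Subtract hole intervals from a base interval along the ray
    parameter t -- a 1D stand-in for subtracting each feature's volume
    from the solid base surface, one feature at a time."""
    result = [base_span]
    for h0, h1 in holes:
        out = []
        for a, b in result:
            if h1 <= a or h0 >= b:
                out.append((a, b))       # no overlap: keep as-is
            else:
                if a < h0: out.append((a, h0))  # part before the hole
                if h1 < b: out.append((h1, b))  # part after the hole
        result = out
    return result

# Ray enters the solid at t=1 and leaves at t=5; two grooves carve spans out.
spans = subtract_spans((1.0, 5.0), [(1.0, 2.0), (1.5, 3.0)])
print(spans[0][0])  # first remaining point -> final intersection at t=3.0
```

The first endpoint of the remaining material is the visible intersection.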
This is shown in Algorithm 3. See Figure 6. In those cases, and following our CSG analogy, we can think of adding (union) the material for peaks, and then subtracting the parts of the feature that are below the base surface; ray tracing then becomes a regular CSG operation (see Figure 6a). Furthermore, all the additions must be performed before any subtraction to obtain the desired result. In practice, this can be achieved by taking into account that the intersections of a ray and a profile are already sorted by the intersection process itself. We simply need to sequentially classify the ray segments as adding or removing material after each intersection. Initial segments only need to be classified if they intersect an internal profile facet, which is done as a subtraction to simulate the wedge described before.
After that, we propose to combine the segments using a special CSG operation that performs a subtraction when either segment is subtracting, and an addition otherwise. This procedure is repeated for every groove present in the current texel, subsequently combining their ray segments.
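As a minimal sketch of that special operation on classified ray segments (the labels and the per-segment pairing are simplifications):

```python
def combine(seg_a, seg_b):
    """Special CSG operation from the text: the result subtracts when
    either operand subtracts, and adds only when both add ('add' thus
    behaves like a logical AND across the grooves)."""
    return "add" if seg_a == "add" and seg_b == "add" else "sub"

ray_a = ["sub", "add", "add"]   # segments of the ray through groove A
ray_b = ["add", "sub", "add"]   # segments of the same ray through groove B
merged = [combine(a, b) for a, b in zip(ray_a, ray_b)]
print(merged.index("add"))  # 2: the first addition segment is the visible one
```

The first "add" segment corresponds to the first addition point, i.e. the visible point.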
At the end, the visible point is the first addition point found. In our current implementation, the assignment of internal and external faces is done manually, but it would not be difficult to do automatically: starting from both extremes, classify every face towards the center as external until the normal of the feature face changes its orientation with respect to the direction of classification.

Special Geometries

Other situations, such as the ones depicted in Figure 7, can be evaluated in a similar way by considering more CSG operations.
Figure 7: Special situations. From left to right: intersected end, isolated end, and corner. In the figure, green parts have a priority assigned.

When evaluating intersections using our approach of combining ray segments, note that each segment has a priority related to the order of the corresponding CSG operation. Hence, material subtractions have priority over additions because they are performed later. Using the same idea, intersected ends are simulated by simply giving higher priority to the ray segments belonging to the recovered portion (the right side of the feature in Figure 7, left).
During ray-profile intersections, this means that ray segments need to be classified as additions or subtractions as before, but giving higher priority to those intersecting the prioritized side of the profile, if any. During the ray combination, priorities are used in the same way, but considering an extra priority level.
Regarding isolated ends (Figure 7, middle), we first need to consider an extra feature perpendicular to the main one in order to process this as an intersection. This extra feature only uses half the original profile, and both profiles are prioritized so that we obtain the desired result.
A similar procedure can be applied for corners, as shown in Figure 7, right. The different priorities associated with the features are previously stored in the data texture, as stated in Section 4. During a pre-processing step, we first determine which special cases are contained in each cell, and then assign the corresponding priorities to each feature. Since priorities always affect one side or the other of a feature profile, we only need to specify which side of the feature has priority. Priorities can thus be efficiently stored as a single tag along with the feature element data, stating whether its profile has left, right, both-sides, or no priority.
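Such a four-valued tag fits in two bits; a hypothetical packing of it alongside the rest of a feature entry might look like this (the bit layout is an assumption for illustration, not the paper's actual encoding):

```python
# Hypothetical 2-bit encoding of the per-feature priority tag.
PRIORITY_NONE, PRIORITY_LEFT, PRIORITY_RIGHT, PRIORITY_BOTH = 0, 1, 2, 3

def pack(feature_bits, priority):
    """Pack the priority tag into the two lowest bits of a feature entry."""
    return (feature_bits << 2) | priority

def unpack(entry):
    """Recover (feature_bits, priority) from a packed entry."""
    return entry >> 2, entry & 0b11

entry = pack(0b1011, PRIORITY_RIGHT)
print(unpack(entry))  # (11, 2)
```

In a real data texture the tag would share a channel with other per-feature fields.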
In order to handle different situations in the same cell, priorities are assigned per feature. These priorities are then properly used when combining the ray segments from two consecutive features.
Profile Perturbations

With a method like the one presented here, it is very easy to vary the feature profile along its path by means of a perturbation function. As explained in Section 4, we reserve two texture channels for this purpose. One interesting and flexible way of doing this is to store the values of a 2D global parameterization of the path in these two entries. This way, for every feature element, we know the value of this parameter at both ends of the segment. For instance, if the path is parameterized such that the parameter has value 1 at one end and 0 at the other, the feature can be reduced from full width to zero along its path, as can be seen in the cracks of Figure 8.
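A sketch of this taper, interpolating the stored per-end parameter values and scaling the profile width linearly (function names are hypothetical):

```python
def param_at(u, t0, t1):
    """Interpolate the global path parameter across one feature segment,
    given the values stored at its two ends (the two texture channels);
    u in [0, 1] is the position along the segment."""
    return t0 + u * (t1 - t0)

def width_at(t, base_width):
    """Linear taper: full width at t = 1, zero width at t = 0,
    as for the cracks in Figure 8."""
    return base_width * t

# Halfway along a segment whose ends store t = 1.0 and t = 0.0:
print(width_at(param_at(0.5, 1.0, 0.0), 0.2))
```

A sinusoidal or polynomial perturbation would simply replace the linear `width_at`.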
This variation can also be driven by a functional expression, like a sinusoid (top left of Figure 8) or a polynomial, that could be stored in the texture and evaluated at render time.

Shadowing

Once the intersection of the ray and the path-based detail is found, shadowing computations are performed.
The lighting step of these computations repeats the previous steps, but this time with respect to the light source direction. For this, the visible point is first re-projected onto the object surface according to the light direction. If, after repeating the process, a blocker is found, the point is in shadow. If not, it is illuminated: the material at that point is retrieved, the normal is computed from the visible facet coordinates, and shading is finally computed.

Approximate Antialiasing

The method presented so far is intended for point-sampling, which generates aliasing artifacts.
In this section we explain how to include efficient antialiasing, both for direct visualization and for shadowing.

Figure 8: Some examples of path-based features rendered with our point-sampling approach. From top left to bottom right: a curved path with a sinusoidal profile variation, a non-height-field profile, a one-sided profile with a square path, cracks, protruding letters, and complex-profile scratches onto spherical shapes.
To compute an anti-aliased version of the shader, the first step is to determine the footprint of the pixel in texture space, just as done in anisotropic texture filtering. This footprint, however, may overlap several texels, which would require evaluating the multiple features contained within. An exact solution to this problem was presented in , but it is unfeasible for real-time rendering. We decided to implement an approximation that evaluates only those texels traversed by the current ray, as before. However, the intersection with the features is now done by taking into account the pixel footprint. Each time an intersection is computed, the footprint is subsequently reduced, and the traversal continues until the entire footprint has been processed, as explained below and depicted in Figure 9. Recall that our texels were extended with nearby information (see Section 4).
Now, depending on the situation, we take two main approaches to intersect the footprint with the features; the first we call region sampling.

Figure 9: Pixel content is determined in a front-to-back order, iteratively visiting each 2D profile. Each time an intersection is found, the pixel footprint is reduced proportionally to the covered area, and the process continues until the footprint is fully covered.

Finally, we also consider the case of mixed situations, where we can see through a pixel both isolated grooves and, for instance, intersecting features. The main difference is that now we need to consider all the visible profile facets contained in the footprint, not only the one intersecting the viewing ray. This requires evaluating the different visible segments, which can easily be accomplished in 1D by projecting the profile onto the base surface, as done during point sampling (see Section 5).
Computing an anti-aliased final color for the pixel footprint is then just a sum over all visible feature segments, weighting the color computed for each segment by the relative size of the projected segment.
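That weighted sum might be sketched as follows (the segment representation, a list of (color, projected size) pairs, is an assumption):

```python
def filtered_color(segments):
    """Weight each visible segment's shaded color by its projected size
    relative to the whole footprint.

    segments -- list of (rgb_tuple, projected_size) pairs."""
    total = sum(size for _, size in segments)
    r = sum(c[0] * s for c, s in segments) / total
    g = sum(c[1] * s for c, s in segments) / total
    b = sum(c[2] * s for c, s in segments) / total
    return (r, g, b)

# Half the footprint sees a dark groove wall, half the lit base surface.
print(filtered_color([((0.2, 0.2, 0.2), 0.5), ((0.8, 0.8, 0.8), 0.5)]))
```

Normalizing by the total covered size keeps the result correct even if the footprint is only partially covered.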
Note that during the shadowing step, the shadowed portions of these facets should be removed in order to obtain a correct result. Since storing each previously visible segment for a later shadowing test would be costly, we decided to store only the points delimiting each visible profile region (see Figure 9). By reprojecting these points during the shadowing step, we can easily determine which facets are both visible and illuminated for the final color computation. This algorithm is somewhat similar to the line sampling method proposed in , but this last part is more suitable for a GPU implementation.
In those cases, it is better to resort to a supersampling strategy, where the footprint is evaluated using different samples that are blended by means of a weighting filter. To compute these samples, the texture is traversed only once, and when a texel contains a special case, the samples are evaluated as the features are decoded. This is acceptable because the grid texture is computed using extended features, as explained above, and requires fewer computations than a full supersampling strategy.
In our examples, we use 4 samples taken halfway between the pixel center and its corners and average them using a simple box filter, which tends to give acceptable results.
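The sampling pattern just described might be sketched as follows (the shader callable is a stand-in for the full point-sampling evaluation):

```python
# 4-sample supersampling: samples halfway between the pixel centre and its
# corners, averaged with a simple box filter.

OFFSETS = [(-0.25, -0.25), (0.25, -0.25), (-0.25, 0.25), (0.25, 0.25)]

def supersample(shade, px, py):
    """shade(x, y) -> scalar (or per-channel) value; average the four
    offset samples with equal box-filter weights."""
    samples = [shade(px + dx, py + dy) for dx, dy in OFFSETS]
    return sum(samples) / len(samples)

# A toy linear shader shows the symmetric pattern preserves the mean.
print(supersample(lambda x, y: x + y, 1.0, 2.0))  # 3.0
```

Each sample would run the point-sampling traversal of Section 5 in the real shader.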
In , antialiasing at groove intersections and ends was analytically evaluated using a polygonal footprint projected onto each feature facet, but this would clearly be unfeasible on the GPU.

Table 1: Rendering times are in frames per second and memory in kilobytes. The resolution of the two textures is also included.
If we apply region sampling, the footprint area is proportionally reduced by the ratio of the length of the projected visible regions to the total footprint length. If we apply supersampling, the footprint reduction ratio is the ratio of the number of samples that missed any feature in the current texel and continued beyond it to the total number of samples.
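Both reduction ratios can be written down directly (function names are hypothetical):

```python
def region_reduction(visible_lengths, footprint_length):
    """Region sampling: fraction of the footprint covered by the
    projected visible regions; the footprint shrinks by this ratio."""
    return sum(visible_lengths) / footprint_length

def supersample_reduction(missed, total):
    """Supersampling: fraction of samples that missed every feature in
    the current texel and continue beyond it."""
    return missed / total

print(region_reduction([0.3], 0.6))   # 0.5
print(supersample_reduction(1, 4))    # 0.25
```

In both cases the remaining footprint carries over to the next traversed texel.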
Dealing with mixed cases, however, may become difficult during the shadowing step, especially when mixing visible regions with points. In those cases, it is simpler to consider visible regions as point samples and evaluate them using supersampling as well. These cases only happen in some special situations, and result in a quality similar to that obtained at intersections and ends. Note that region sampling is far more accurate and faster than supersampling, so whenever possible it is better to use this technique. The rendering times for the images are included in Table 1, and correspond to the shader running on a GeForce GPU. As can be seen, this table also includes the memory consumption of our different textures as well as their resolution.
In Figure 8 we can see some results of the possibilities that this method opens: features with curved paths, features with perturbations along their path (like a sinusoidal variation, and a linear one for cracks), one-sided profiles that allow the modelling of depressions or protruding surfaces, protruding features like text, and features over curved surfaces. These images were rendered using our point-sampling method. Observe that the images show masking and shadowing effects, as well as different special situations like feature intersections, ends, and corners. Our method allows the correct rendering of all these effects at real-time frame rates. Furthermore, our textures require little memory: the grid textures used in these examples are very small, starting from 25x25 texels, and the data textures are even smaller. See Table 1. In Figure 1, an example of a more complex scene is shown, this time rendered using our antialiasing scheme. The engravings in the Knight Champion armor were generated directly from the displacement map the artist provided with the model. This was done by vectorizing the image with Inkscape and drawing the segments in texture space for the generation of the textures with a Maya plug-in.
The obtained textures require a higher resolution due to the complexity of the pattern (see Table 1).