pixel jet stream (pixeljetstream)
<h1>Life of a triangle - NVIDIA's logical pipeline (2015-02-06)</h1>
Hi, while gathering public material on how the hardware works, I tried to create a compressed architecture image. It is based on images and information taken from the listed NVIDIA sources. It may not be free of errors, but should help clear up some misconceptions (and hopefully not spawn more ;) ).<br />
<ul>
<li><a href="http://www.hardwarebg.com/b4k/files/nvidia_gf100_whitepaper.pdf">Fermi Whitepaper</a></li>
<li><a href="http://www.geforce.com/Active/en_US/en_US/pdf/GeForce-GTX-680-Whitepaper-FINAL.pdf">Kepler Whitepaper</a></li>
<li><a href="http://international.download.nvidia.com/geforce-com/international/pdfs/GeForce_GTX_980_Whitepaper_FINAL.PDF">Maxwell Whitepaper</a></li>
<li><a href="http://www.highperformancegraphics.org/previous/www_2010/media/Hot3D/HPG2010_Hot3D_NVIDIA.pdf">Fast Tessellated Rendering on Fermi GF100</a></li>
<li><a href="http://on-demand.gputechconf.com/gtc/2013/presentations/S3466-Programming-Guidelines-GPU-Architecture.pdf">Programming Guidelines and GPU Architecture Reasons Behind Them</a></li>
</ul>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://ck.luxinia.de/blog/fermipipeline.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://ck.luxinia.de/blog/fermipipeline.png" width="375" /></a></div>
<br />
<h1>
GPUs are super parallel work distributors</h1><br />
Why all this complexity? In graphics we have to deal with data amplification that creates lots of variable workloads. Each drawcall may generate a different number of triangles. The number of vertices after clipping differs from what our triangles were originally made of. After back-face and depth culling, not all triangles may need pixels on the screen. The screen size of a triangle can mean it requires millions of pixels, or none at all.
<br />
<br />
As a consequence, modern GPUs let their primitives (triangles, lines, points) follow a logical pipeline, not a physical pipeline. In the old days before G80's unified architecture (think DX9 hardware, PS3), the pipeline was represented on the chip by its different stages, and work would run through them one after another. G80 essentially reused some units for both vertex and fragment shader computations, depending on the load, but it still had a serial process for the primitives/rasterization and so on. With Fermi the pipeline became fully parallel, which means the chip implements a logical pipeline (the steps a triangle goes through) by reusing multiple engines on the chip.
<br />
<br />
Let's say we have two triangles, A and B. Parts of their work could be in different logical pipeline steps. A has already been transformed and needs to be rasterized. Some of its pixels could be running pixel-shader instructions already, while others are being rejected by the depth-buffer (Z-cull), others could already be written to the framebuffer, and some may actually be waiting. And next to all that, we could be fetching the vertices of triangle B. So while each triangle has to go through the logical steps, lots of them could be actively processed at different steps of their lifetime. The job (getting a drawcall's triangles on screen) is split into many smaller tasks, and even subtasks, that can run in parallel. Each task is scheduled to the resources that are available, and this scheduling is not limited to tasks of a certain type (vertex-shading runs in parallel to pixel-shading).
<br />
<br />
Think of a river that fans out: parallel pipeline streams that are independent of each other, each on its own timeline, some branching more than others. If we were to color-code the units of a GPU based on the triangle or drawcall each is currently working on, it would be multi-colored blinkenlights :)
<br />
<br />
<h1>
GPU architecture</h1><br />
<center>
<br />
<img border="0" src="http://ck.luxinia.de/blog/fermipipeline_maxwell_gpu.png" />
<br /><br />
</center>
Since Fermi, NVIDIA has used a similar principal architecture. There is a <b>Giga Thread Engine</b> which manages all the work that's going on. The GPU is partitioned into multiple <b>GPCs</b> (Graphics Processing Cluster), each with multiple <b>SMs</b> (Streaming Multiprocessor) and one <b>Raster Engine</b>. There are lots of interconnects in this process, most notably a <b>Crossbar</b> that allows work migration across GPCs or other functional units like the <b>ROP</b> (render output unit) subsystems.<br />
<br />
The work that a programmer thinks of (shader program execution) is done on the SMs. An SM contains many <b>Cores</b> which do the math operations for the threads. One thread could be a vertex- or pixel-shader invocation, for example. Those cores and other units are driven by <b>Warp Schedulers</b>, which manage groups of 32 threads as warps and hand over the instructions to be performed to <b>Dispatch Units</b>. The code logic is handled by the scheduler and not inside a core itself, which just sees something like <i>"sum register 4234 with register 4235 and store in 4230"</i> from the dispatcher. A core itself is rather dumb, compared to a CPU where a core is pretty smart. The GPU puts the smartness into higher levels: it conducts the work of an entire ensemble (or multiple ensembles, if you will). <br />
<br />
How many of these units are actually on the GPU (how many SMs per GPC, how many GPCs...) depends on the chip configuration itself. As you can see above, GM204 has 4 GPCs with 4 SMs each, but Tegra X1 for example has 1 GPC and 2 SMs, both using the Maxwell design. The SM design itself (number of cores, instruction units, schedulers...) has also changed from generation to generation (see first image) and helped make the chips efficient enough to scale from high-end desktop to notebook to mobile.<br/>
<br />
<h1>
The logical pipeline</h1><br />
For the sake of simplicity, several details are omitted. We assume the drawcall references some index- and vertexbuffer that is already filled with data and lives in the DRAM of the GPU, and that it uses only a vertex- and pixelshader (GL: fragmentshader).
<ol>
<center>
<img border="0" src="http://ck.luxinia.de/blog/fermipipeline_begin.png" />
<br /><br />
</center>
<li>The program makes a <b>drawcall</b> in the graphics API (DX or GL). This reaches the driver at some point, which does a bit of validation to check whether things are "legal", and inserts the command in a GPU-readable encoding inside a <b>pushbuffer</b>. A lot of bottlenecks can happen here on the CPU side of things, which is why it is important that programmers use APIs well, and use techniques that leverage the power of today's GPUs.
</li>
<li>After a while or explicit "flush" calls, the driver has buffered up enough work in a pushbuffer and sends it to be processed by the GPU (with some involvement of the OS). The <b>Host Interface</b> of the GPU picks up the commands which are processed via the <b>Front End</b>.
</li>
<li>We start our work distribution in the <b>Primitive Distributor</b> by processing the indices in the indexbuffer and generating triangle work batches that we send out to multiple GPCs.
</li>
<center>
<br />
<img border="0" src="http://ck.luxinia.de/blog/fermipipeline_sm.png" />
<br /><br />
</center>
<li>Within a GPC, the <b>Poly Morph Engine</b> of one of the SMs takes care of fetching the vertex data from the triangle indices (<b>Vertex Fetch</b>).
</li>
<li>After the data has been fetched, warps of 32 threads are scheduled inside the SM and will be working on the vertices.
</li>
<li>The SM's warp scheduler issues the instructions for the entire warp in order. The threads run each instruction in lock-step and can be masked out individually if they should not actively execute it. There can be multiple reasons for requiring such masking: for example when the current instruction is part of the "if (true)" branch and the thread-specific data evaluated to "false", or when a loop's termination criterion was reached in one thread but not another. Therefore, having lots of branch divergence in a shader can increase the time spent for all threads in the warp significantly. Threads cannot advance individually, only as a warp! Warps, however, are independent of each other.
</li>
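The lock-step execution with per-thread masking described above can be sketched with a toy model. This is purely illustrative Python, not how the hardware is actually built: each branch side is "issued" once for the whole warp, and a mask decides which threads commit results, so a divergent warp pays for both sides.

```python
# Toy SIMT model: a warp executes both sides of a branch,
# masking out threads whose condition does not match.
# Purely illustrative; real warp scheduling is far more involved.

WARP_SIZE = 32

def run_warp(data):
    """Each thread computes: x*2 if x is even, else x+1.
    Returns the per-thread results and the number of
    warp-wide instruction issues spent."""
    cost = 0

    cond = [x % 2 == 0 for x in data]  # one issue: evaluate the condition
    cost += 1

    result = list(data)
    if any(cond):                      # "then" side, issued once, masked
        for i, active in enumerate(cond):
            if active:
                result[i] = data[i] * 2
        cost += 1
    if not all(cond):                  # "else" side, issued once, masked
        for i, active in enumerate(cond):
            if not active:
                result[i] = data[i] + 1
        cost += 1
    return result, cost

# Fully coherent warp: only one branch side is issued.
_, cost_coherent = run_warp([2] * WARP_SIZE)
# Divergent warp: both sides are issued, all threads pay for both.
_, cost_divergent = run_warp(list(range(WARP_SIZE)))
print(cost_coherent, cost_divergent)
```

The divergent warp issues strictly more instructions than the coherent one, which is the cost the text warns about.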
<li>The warp's instruction may be completed at once or may take several dispatch turns. For example, the SM typically has fewer units for load/store than for basic math operations.
</li>
<li>As some instructions take longer to complete than others, especially memory loads, the warp scheduler may simply switch to another warp that is not waiting for memory. This is the key concept of how GPUs overcome the latency of memory reads: they simply switch out groups of active threads. To make this switching very fast, all threads managed by the scheduler have their own registers in the <b>register-file</b>. The more registers a shader program needs, the fewer threads/warps have space. The fewer warps we can switch between, the less useful work we can do while waiting for instructions to complete (foremost memory fetches).
</li>
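The register/warp trade-off lends itself to a quick back-of-the-envelope calculation. The numbers below are taken from the Maxwell whitepaper (65536 32-bit registers per SM and at most 64 resident warps); they differ between generations, so treat this as a sketch rather than a formula for any particular chip.

```python
# Back-of-envelope occupancy: how many warps fit into the register file?
# Assumed Maxwell-era limits (see whitepaper); other generations differ.

REGISTERS_PER_SM = 65536   # 32-bit registers in the SM's register file
MAX_WARPS_PER_SM = 64      # hardware cap on resident warps
WARP_SIZE = 32

def resident_warps(regs_per_thread):
    """Warps that can be kept resident (switchable) at once."""
    regs_per_warp = regs_per_thread * WARP_SIZE
    return min(MAX_WARPS_PER_SM, REGISTERS_PER_SM // regs_per_warp)

# The more registers a shader needs, the fewer warps can hide latency.
for regs in (32, 64, 128, 255):
    print(regs, resident_warps(regs))
```

At 32 registers per thread the warp cap is the limit; at 255 only a handful of warps remain to switch between while memory fetches are in flight.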
<center>
<br />
<img border="0" src="http://ck.luxinia.de/blog/fermipipeline_memoryflow.png" />
<br /><br />
</center>
<li>Once the warp has completed all instructions of the vertex-shader, its results are processed by the <b>Viewport Transform</b>. The triangle gets clipped by the clip-space volume and is ready for rasterization. We use the L1 and L2 caches for all this cross-task communication data.
</li>
<center>
<br />
<img border="0" src="http://ck.luxinia.de/blog/fermipipeline_raster.png" />
<br /><br />
</center>
<li>Now it gets exciting: our triangle is about to be chopped up and may leave the GPC it currently lives on. The bounding box of the triangle is used to decide which raster engines need to work on it, as each engine covers multiple tiles of the screen. The triangle gets sent out to one or multiple GPCs via the <b>Work Distribution Crossbar</b>. We effectively split our triangle into lots of smaller jobs now.
</li>
<center>
<br />
<img border="0" src="http://ck.luxinia.de/blog/fermipipeline_mid.png" />
<br /><br />
</center>
<li><b>Attribute Setup</b> at the target SM will ensure that the interpolants (for example the outputs we generated in a vertex-shader) are in a pixel shader friendly format.
</li>
<li>The <b>Raster Engine</b> of a GPC works on the triangle it received and generates the pixel information for those sections that it is responsible for (also handles back-face culling and Z-cull).
</li>
<li>Again we batch up 32 pixel threads, or rather eight 2x2 pixel quads, which are the smallest unit we will always work with in pixel shaders. The 2x2 quad allows us to calculate derivatives for things like texture mip-map filtering (a big change in texture coordinates within the quad causes a higher mip to be selected). Those threads within the 2x2 quad whose sample locations do not actually cover the triangle are masked out (gl_HelperInvocation). One of the local SM's warp schedulers will manage the pixel-shading task.</li>
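The derivative trick of the 2x2 quad can be illustrated with plain differencing between neighboring pixels. This is a simplified sketch of the idea (coarse forward differences only); real hardware and shading languages offer finer-grained derivative modes.

```python
# Derivatives within a 2x2 pixel quad by differencing neighbors --
# the way dFdx/dFdy-style values come almost for free under
# lock-step execution. Simplified illustration.

def quad_derivatives(values):
    """values: 2x2 quad laid out as [[top_left, top_right],
    [bottom_left, bottom_right]] of some per-pixel quantity,
    e.g. a texture coordinate. Returns coarse (ddx, ddy)."""
    ddx = values[0][1] - values[0][0]  # horizontal neighbor difference
    ddy = values[1][0] - values[0][0]  # vertical neighbor difference
    return ddx, ddy

# A quad sampling texture coordinate u: large ddx/ddy would select
# a smaller (blurrier) mip level during texture filtering.
quad = [[0.10, 0.35], [0.12, 0.37]]
ddx, ddy = quad_derivatives(quad)
print(ddx, ddy)
```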
<li>The same warp scheduler instruction game that we had in the vertex-shader logical stage is now played on the pixel-shader threads. The lock-step processing is particularly handy here, because we can access the values within a pixel quad almost for free, as all threads are guaranteed to have their data computed up to the same instruction point <a href="https://www.opengl.org/registry/specs/NV/shader_thread_group.txt">(NV_shader_thread_group)</a>.
</li>
<center>
<br />
<img border="0" src="http://ck.luxinia.de/blog/fermipipeline_end.png" />
<br /><br />
</center>
<li>Are we there yet? Almost. Our pixel-shader has completed the calculation of the colors to be written to the rendertargets, and we also have a depth value. At this point we have to take the original API ordering of triangles into account before we hand that data over to one of the <b>ROP</b> (render output unit) subsystems, each of which has multiple ROP units. Here depth-testing, blending with the framebuffer and so on are performed. These operations need to happen atomically (one color/depth set at a time) to ensure we don't have one triangle's color and another triangle's depth value when both cover the same pixel.
<br />NVIDIA typically applies memory compression to reduce memory bandwidth requirements, which increases "effective" bandwidth (see the GTX 980 whitepaper).
</li>
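The depth-test-then-blend step a ROP performs per pixel can be sketched roughly as follows. This is a hypothetical simplification (one pixel, a less-than depth test, classic alpha blending), not the actual unit's logic.

```python
# Minimal ROP-style per-pixel operation: depth test, then alpha blend.
# One fragment at a time, mirroring the required atomicity: color and
# depth of a pixel are updated together or not at all.

def rop_write(pixel, frag_rgb, frag_a, frag_depth):
    """pixel: dict with 'rgb' and 'depth'. Less-than depth test,
    classic src-alpha / one-minus-src-alpha blending."""
    if frag_depth >= pixel['depth']:
        return  # fragment rejected by the depth test
    pixel['rgb'] = tuple(frag_a * f + (1.0 - frag_a) * d
                         for f, d in zip(frag_rgb, pixel['rgb']))
    pixel['depth'] = frag_depth

pixel = {'rgb': (0.0, 0.0, 0.0), 'depth': 1.0}
rop_write(pixel, (1.0, 0.0, 0.0), 0.5, 0.4)   # passes, blends to 50% red
rop_write(pixel, (0.0, 1.0, 0.0), 1.0, 0.7)   # fails depth test (0.7 >= 0.4)
print(pixel)
```

Because both writes touch the same pixel, processing them one at a time (in API order) is what prevents mixing one triangle's color with another triangle's depth.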
</ol>
Phew! We are done: we have written some pixels into a rendertarget. I hope this information was helpful for understanding some of the work/data flow within a GPU. It may also help explain another side-effect of why synchronization with the CPU is really hurtful: one has to wait until everything is finished and no new work is submitted (all units become idle). That means when sending new work, it takes a while until everything is fully under load again, especially on the big GPUs.<br>
<br />
In the image below you can see how we rendered a CAD model and colored it by the different SMs or warp ids that contributed to the image <a href="https://www.opengl.org/registry/specs/NV/shader_thread_group.txt">(NV_shader_thread_group)</a>. The result is not frame-coherent, as the work distribution will vary from frame to frame. The scene was rendered using many drawcalls, of which several may also be processed in parallel (using NSIGHT you can see some of that drawcall parallelism as well).
<br />
<br />
<center>
<img border="0" src="http://ck.luxinia.de/blog/fermipipeline_distribution.png" />
<br /><br />
</center>
<br />
<h2>Further reading</h2>
Next to the white papers mentioned at the beginning, the article series <a href="https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/">"A trip through the graphics-pipeline"</a> by Fabian Giesen is worth a read, and there is also a <a href="http://on-demand.gputechconf.com/gtc/2013/video/S3466-Performance-Optimization-Guidelines-GPU-Architecture-Details.mp4">quite in-depth talk</a> on the details of memory and instruction processing on the SM by Paulius Micikevicius. <a href="http://graphics.stanford.edu/papers/pomegranate/">Pomegranate: A Fully Scalable Graphics Architecture</a> describes the concept of parallel stages and work distribution between them.<br>
This post was motivated by the wish to help clear up some "serial issues" in version 1.1 of the very nicely illustrated <a href="http://simonschreibt.de/gat/renderhell/">Render Hell</a> by Simon Trümpler; looking forward to a new revision of that :)
<h1>New OpenGL samples and techniques (2015-02-05)</h1>
It's been a while :) Many cool things happened in the last year, though. Next to a very exciting roadtrip visiting several national parks in California with friends, I was glad to present on <a href="http://on-demand.gputechconf.com/gtc/2014/presentations/S4385-order-independent-transparency-opengl.pdf">Order Independent Transparency at GTC</a> in San Jose and on <a href="http://on-demand.gputechconf.com/siggraph/2014/presentation/SG4117-OpenGL-Scene-Rendering-Techniques.pdf">rendering techniques at SIGGRAPH</a> in Vancouver. More recent work has been surfacing lately as well.<br>
<br>
The NV_command_list extension has been <a href="http://www.slideshare.net/tlorach/opengl-nvidia-commandlistapproaching-zerodriveroverhead">disclosed at SIGGRAPH Asia</a>, and I am very happy to work on it with Pierre Boudier and Tristan Lorach.<br>
<br>
Several samples I've worked on can now be found at <a href="https://github.com/nvpro-samples">GitHub</a>. More are to come (oh well readme writing and documentation...).<br>
<br>
<a href="https://github.com/nvpro-samples/gl_commandlist_basic">gl_command_list_basic</a><br>
<br>
<div class="separator" style="clear: both; text-align: center;"><a href="https://github.com/nvpro-samples/gl_commandlist_basic" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://raw.githubusercontent.com/nvpro-samples/gl_commandlist_basic/master/doc/sample.jpg" /></a></div>
<br>
<a href="https://github.com/nvpro-samples/gl_occlusion_culling">gl_occlusion_culling</a><br>
<br>
<div class="separator" style="clear: both; text-align: center;"><a href="https://github.com/nvpro-samples/gl_occlusion_culling" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://raw.githubusercontent.com/nvpro-samples/gl_occlusion_culling/master/doc/frozenculling.jpg" /></a></div>
<br>
<a href="https://github.com/nvpro-samples/gl_cadscene_rendertechniques">gl_cadscene_rendertechniques</a><br>
<br>
<div class="separator" style="clear: both; text-align: center;"><a href="https://github.com/nvpro-samples/gl_cadscene_rendertechniques" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://raw.githubusercontent.com/nvpro-samples/gl_cadscene_rendertechniques/master/doc/sample.jpg" /></a></div>
<h1>Alles Im Fluss - open beta :) (2014-01-16)</h1>
For the start of the new year I am happy to announce that a long-time project (dating back to 2008) is finally ready for the public: "Alles Im Fluss" (everything flows), a 3dsmax plugin to aid poly modelling, is available for open beta.<br>
<br>
Alles Im Fluss provides the ability to quickly and easily draw polygon strips, connections or extrusions, and cap holes while maintaining clean, mostly quad-based topology.<br>
<br>
One single tool provides you with all functionality depending on the sub-object type you are in, or keyboard modifiers used.<br>
<br>
<img src="http://allesimfluss.biz/imagesHTML/brushconfig2.png" width="630">
<br>
The tool provides you with control to refine the surface flow of connections or caps and replay drawn paths on other geometry.<br>
<br>
<img src="http://allesimfluss.biz/imagesHTML/keyfeatures2.png" width="630">
<br>
Head over to <a href="http://www.allesimfluss.biz/">www.allesimfluss.biz</a> and grab a copy for evaluation (<b>fully-featured</b>)!<br>
Pricing is yet to be determined, however, you can expect it to be the cost of one game.<br>
<br>
Hope to be able to update it for a bit; it's a nice "topic change" from my regular job around graphics programming, back to the artist roots. The next goal is to be able to "pick up" paths from existing geometry and then replay them.
<h1>Simple GLSL compilation checker (2013-03-30)</h1>
As NVIDIA's cgc is getting kinda dated (it is able to compile GLSL as well), I threw together a simple command-line tool for basic offline compilation of GLSL shaders. Find it at the
<a href="https://github.com/pixeljetstream/glslc">GitHub repository</a>
<h1>tangent space can cost extra money (2012-06-03)</h1>
Although tangent space normal mapping has been used in games for a while now, there is still often one major flaw left in the asset pipeline:
<i>Unsynchronized Tangent Space (TS)</i>
<br>
<br>
While TS as such is defined mathematically, and most people end up using similar (but not necessarily the same) definitions, it is a <i>per-triangle feature</i>. Therefore, the actual per-vertex storage can vary as well. There are <i>different ways to smooth the vectors into a per-vertex attribute</i> (just like vertex-normal smoothing sometimes may break geometric vertices open for hard edges). Furthermore, there are some typical optimizations for actual display, such as reconstructing one of the vectors as the cross product of the others, or avoiding per-pixel normalization of the matrix.
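To make the "per-triangle feature" concrete, here is one common way to derive a triangle's tangent and bitangent from its positions and UVs. This is a sketch of one possible convention; the handedness, normalization, and later per-vertex smoothing choices are exactly where tools diverge.

```python
# One common per-triangle tangent/bitangent derivation: solve the 2x2
# system that maps UV deltas to position deltas. Conventions (handedness,
# per-vertex smoothing) vary between bakers -- which is the whole point.

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    e1 = [b - a for a, b in zip(p0, p1)]           # edge p0 -> p1
    e2 = [b - a for a, b in zip(p0, p2)]           # edge p0 -> p2
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    det = du1 * dv2 - du2 * dv1
    r = 1.0 / det  # degenerate UVs (det == 0) need special handling
    tangent   = [(dv2 * a - dv1 * b) * r for a, b in zip(e1, e2)]
    bitangent = [(du1 * b - du2 * a) * r for a, b in zip(e1, e2)]
    return tangent, bitangent

# Axis-aligned triangle with matching UVs: tangent follows +X (the U axis).
t, b = triangle_tangent((0, 0, 0), (1, 0, 0), (0, 1, 0),
                        (0, 0), (1, 0), (0, 1))
print(t, b)
```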
<br>
<br>
Major applications such as 3dsmax have suffered from this problem in <a href="http://www.polycount.com/forum/showthread.php?t=68173">past versions</a>: the realtime display was not matched to the baker (only the offline renderer was perfect). Some developers had tools for this, such as id software in the doom3 days, or CryTek (who document their tangent-space math quite well on the web). For a lot of others, even big players, there is no public information on the tangent space used in rendering.
<br>
<br>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.luxinia.de/images/tangentspacesync.gif" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="269" width="500" src="http://www.luxinia.de/images/tangentspacesync.gif" /></a></div>
<br>
<br>
<b>This mismatch of "encoder/decoder" costs money.</b> Artists spend extra time fixing interpolation issues, adding geometry, tweaking UV layouts... to get visually acceptable results. And yet often their preview (e.g. inside the modeller) might still be "off" in the end (but close enough). As a coder I might think "I know my math" but be unaware of the different baking tools and import/export issues. As an artist I work with what I was given and am used to "working with limitations". This causes unnecessary frustration and can lead to dispute if "one" side actually knows better.
<br>
<br>
And knowing better should be no problem today. Popular baking tools, such as <a href="http://www.xnormal.net">xnormal</a>, allow custom tangent space definitions. I've worked on enhancing the 3dsmax pipeline myself. <a href="http://www.polycount.com/forum/showthread.php?t=72861">The 3pointshader</a> fixed the mismatch in old max versions, simply by encoding the "correct" tangent space (synced to 3dsmax's default baker) as 3 UVW-channels. That way the realtime shader was matched to the baker. Accessing UVW data is also not too hard for import/export. Furthermore, 3dsmax allows modifying the bake process through a plugin, and one could use this to apply the same UVW-channel trick, or to disable per-pixel normalization during baking (sample project with sources <a href="http://www.luxinia.de/download/3pointbake_experimental.zip">here</a>).
<br>
<br>
So please, for the sake of saving time (and money) and billions of "my normalmap looks wrong" worries by artists: all sides, spend one day to talk it through. Educate the artists on what "bakers" they can use; educate the coders that their TS choice (all the nitty-gritty details) matters for the asset pipeline. It might not have mattered in the bump-map days or when testing simple geometry, but once you bake complex high-res to low-res it does!
<h1>mini lua primer (2011-09-09)</h1>
<pre><code>
-------------------------
-- tables are general containers; they can be indexed by anything
-- (numbers, tables, strings, functions...)
local function blah()
  print("blah")
end
local tab, blubb = {}, {}
tab[blah] = blubb
tab.name = blubb -- is the same as
tab["name"] = blubb
-- tables are always passed as "pointers/references" never copied
-- array index starts with 1 !!
-- they become garbage collected when not referenced anymore
pos = {1,2,3}
a = { pos = pos }
pos[3] = 4
pos = {1,1,1} -- rebinds the variable pos to a new table
a.pos[3] -- is still 4
-------------------------
--[[ multiline
comment ]]
blah = [==[ multiline string and comment
can use multiple = for bracketing to nest ]==]
-------------------------
--- multiple return values allow easy swapping
a,b = b,a
-------------------------
-- object oriented stuff
-- : operator passes first arg
a.func(a,blah) -- is same as
a:func(blah)
-- metatables allow to index class tables
myclass = {}
myclassmeta = {__index = myclass}
function myclass:func()
self -- automatic variable through the : definition for the first
     -- arg passed to func
end
-- above is equivalent to
myclass.func = function (self)
end
object = {}
setmetatable(object,myclassmeta)
object:func() -- is now same as
myclass.func(object)
-- until func gets specialized per object
function object:func()
-- lua will look up first in the object table, then in the metatable
-- it will only write to the object table
end
-------------------------
--- upvalues for function specialization
function func(obj)
return function ()
return obj * 2
end
end
a = func(1)
b = func(2)
a() -- returns 2
b() -- returns 4
--- non passed function arguments become nil automatically
function func (a,b)
return a,b
end
a,b = func(1) -- b is "nil"
--- variable args
function func(...)
local a,b = ...
--- a,b would be first two args
--- you can also put args in a table
local t = {...}
end
-------------------------
--- conditional assignment chaining
--- 0 is not "false", only "false" or "nil" are
a = 0
b = a or 1 -- b is 0, if a was false/nil it would be 1
c = (a == 0) and b or 2 -- c is 0 (b's value)
-- the first time a value is "valid" (non-false/nil) that value is taken
-- that way you can do default values
function func(a,b)
a = a or 1
b = b or 1
end
-------------------------
--- sandboxing
function sandboxedfunc()
-- after setfenv below we can only call what is enabled in the environment
-- so in the example below doing stuff like io.open wouldn't work here
-- blubb gets created in the current environment
blubb = doit()
end
local enva = {
doit = function () return 1 end
}
local envb = {
doit = function () return 2 end
}
setfenv(sandboxedfunc,enva)()
--enva.blubb is now 1
setfenv(sandboxedfunc,envb)()
--envb.blubb is now 2
-- sandboxedfunc could also come from a file, which makes creating fileformats
-- quite easy, as they can internally be lua code
-------------------------
--- functions without () and function chaining
-- to make ini/config files quite easy, lua allows omitting () for function
-- calls when the argument is either a string or a table
function testfunc( a )
end
-- valid calls to above function
testfunc "blah"
testfunc {1,2,3}
-- we can even expand this to create fileformat like structures
function group( name)
return function (content)
local grp = {
name = name,
content = content,
}
return grp
end
end
local grp = group "test" {1,3,5}
-- equivalent to: group("test")({1,3,5})
-- grp.content[2] would be 3
-- could also build a hierarchy
local grp = group "root" {
group "child a" {},
group "child b" {},
}
-- grp.content[1].name would be "child a"
</code></pre>
<h1>estrela as shader editor (2011-01-01)</h1>
Recently I've been doing more work with Lua and Cg/GLSL again, hence I added a couple of features to the <a href="http://sourceforge.net/projects/estrelaeditor/">estrela editor</a>.<br /><br />Lua-wise I added some experimental type-guessing, mostly meant to aid auto-completion for luxinia classes. Also, the lua-apis that get loaded can now be specified per interpreter, so that no luxinia functions get suggested when you are using a "normal" lua interpreter. Getting useful auto-completion and api help is still a big task, though. Especially getting user-created functions/classes in somehow would be great. Maybe a static tool that generates files from a lua project or so. <br /><br />Most problems with "dynamic" text analysis were that when the user edits old stuff, you also have to somehow check whether keywords were changed, added, removed... hence I avoid that complexity for now. I'd rather prefer a static solution that the user triggers; that way it's hopefully simpler and more robust.<br /><br />Another focus lately was the Cg tool. I've added support for nvShaderPerf and an ARB/NV program beautifier (indenting branches/flow, and inserting comments as to which constants map to what variable). That makes it a bit easier to see what stuff triggers branching and so on.<br />I've also added automatic setting of the GLSL input flag for cgc and some automatic defines such as "_VERTEX_"... so that one can use #ifdef _VERTEX_ and still have all GLSL shader code in one file. A GLSL spec and api description is now also part of estrela. I took the nice <a href="http://sourceforge.net/projects/estrelaeditor/">opengl 4.1 quick reference card</a> as a base.
So much for now.<br /><br />Still haven't found time to push the open-sourcing of luxinia further and to add GLSL shader management (which will require ARB_separate_shader) to it for PhD work. But anyway, new year now ;)
<h1>3point shader (2010-07-05)</h1>
<div style="text-align: left;">So long no updates; mostly I am still working on PhD stuff. Finally the publication on smart-visibility rendering techniques for medical datasets is out:</div><div><a href="http://www.springerlink.com/content/94n1840v2602646w">http://www.springerlink.com/content/94n1840v2602646w</a></div><div>And I am mostly working on a CUDA port of vessel histogram analysis and a bigger system for coronary heart vessel exploration. </div><div><br /></div><div>Furthermore, the 3point shader <a href="http://www.3pointstudios.com/3pointshader_about.shtml">http://www.3pointstudios.com/3pointshader_about.shtml</a> is also released (both commercial and non-commercial free editions). The free edition uses the same plugin and shader, but doesn't have the convenient and time-saving UI, nor the sample files... that said, if you want to play with it, you can do so for free. 
</div><div><br /></div><div><span class="Apple-style-span" style="color: rgb(0, 0, 238); -webkit-text-decorations-in-effect: underline; "><img src="http://www.3pointstudios.com/imgnu/included_thumb.png" border="0" alt="" style="display: block; margin-top: 0px; margin-right: auto; margin-bottom: 10px; margin-left: auto; text-align: center; cursor: pointer; width: 400px; height: 313px; " /></span></div><div><span class="Apple-style-span" style="color: rgb(0, 0, 238); -webkit-text-decorations-in-effect: underline; "><br /></span></div><div><br /></div><div>A major contribution in this work is the fix of 3dsmax's broken tangentspace normalmap display in the realtime viewport. They don't send the same tangent space as they do when the scanline baker generates the normalmap. Autodesk was made aware of this problem. The cool thing is that, as it simply is a viewport fix, one can get a great quality improvement out of all standard bakes. And people no longer have to waste additional geometry to fix the "smoothing" issues the broken viewport had. </div><div>The thing is that many game companies have "taken" the broken viewport as the "correct" tangentspace, which it simply isn't, as Autodesk has several inconsistencies within the max SDK for exposing the tangentspace. If you are interested in the fix or how to use it in game engines, you can contact 3pointstudios about it. </div><div><br /></div><div>Another addition to the plugin/shader is mirroring support for object-space normalmaps. I have experimented with that quite a bit, as well as with transforming object-space to tangentspace in offline tools to allow exchanging the baker. 
Anyway, the plugin generates per-vertex reflection vectors for the os-normalmaps, as long as you offset mirrored UV parts by a multiple of 1 (which you would do anyway to prevent baking overlaps).</div>
<h1>function call highlighting (2009-11-14)</h1>
<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://crazybutcher.luxinia.de/code/estrelafncall.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 572px; height: 388px;" src="http://crazybutcher.luxinia.de/code/estrelafncall.png" alt="" border="0" /></a><br />I am quite a visual assist addict and miss some of its features in other IDEs. For luxinia's Lua and Cg use, I tweak the <a href="http://sourceforge.net/projects/estrelaeditor/"><span style="text-decoration: underline;">estrela editor</span></a> to my own needs. wxWidgets's scintilla version doesn't allow you to use the style-bits as flexibly as I'd love to, and as a result the lexer overwrites the manual changes one makes. But with the indicators at least, you can make sure they aren't modified. So the latest addition is function-call highlighting, something I really like in VA.<br /><br />As you might see in the text, I am also working on a Lua binding for OpenCL. Whilst I've used manual bindings before, this time I used swig. It needed a few "dirty" hacks and a swig utility library, but now it more or less works fine. 
Binding, sources and samples will come with a future luxinia release.pixeljetstreamhttp://www.blogger.com/profile/12290547417993234263noreply@blogger.comtag:blogger.com,1999:blog-3966084362499107503.post-67342468585336903802009-10-14T17:11:00.002+02:002009-10-14T17:13:43.910+02:00the realtime GI courseThorsten just told me about the realtime GI course (Siggraph 2009) being available online<br /><a href="http://www.cs.ucl.ac.uk/staff/j.kautz/RTGICourse/">http://www.cs.ucl.ac.uk/staff/j.kautz/RTGICourse/</a><br /><br />He also pointed me towards the GI compendium<br /><a href="http://www.cs.kuleuven.be/%7Ephil/GI/" target="_blank">http://www.cs.kuleuven.be/~<wbr>phil/GI/</a>pixeljetstreamhttp://www.blogger.com/profile/12290547417993234263noreply@blogger.comtag:blogger.com,1999:blog-3966084362499107503.post-40171215493074939022009-09-18T13:23:00.009+02:002009-09-18T15:53:29.204+02:00the virtual endoscopy<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://crazybutcher.luxinia.de/wip/endoscopy/20090918131452_00000.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 353px; height: 197px;" src="http://crazybutcher.luxinia.de/wip/endoscopy/20090918131452_00000.jpg" alt="" border="0" /></a><br />Here you can see the lung. For an upcoming presentation, I've been tweaking some settings of the endoscopy system I developed a year ago. Its focus was on nasal sinuses, but it works just fine for other hollow organs. My supervisor on this project (my diploma thesis) had made contacts at the university clinic in Leipzig, and together with a leading ENT surgeon the usability of the system was refined. In the end it was used in a larger clinical study at two locations with some 100+ patients (who preferred the virtual pictures over video, hehe).
Texturing is tri-planar, and several post-processing steps are needed to smooth normals and so on (interleaved sampling, hitpoint refinement, ...).<br />More details about the implementation can be found in the publications around this project:<br /><a href="http://www-e.uni-magdeburg.de/kubisch/vis-1088.pdf">SinusEndoscopy-IEEE Vis 2008 Paper</a>, <a href="http://www-e.uni-magdeburg.de/kubisch/sinusendoscopy-vis08.ppt">Slides</a><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://crazybutcher.luxinia.de/wip/endoscopy/wetness4.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 340px; height: 225px;" src="http://crazybutcher.luxinia.de/wip/endoscopy/wetness4.png" alt="" border="0" /></a>The main focus was a surface depiction similar to what the surgeons are used to (but not too realistic, to avoid giving a false impression of data certainty: CT cannot show tissue diseases). The effect is similar to NVIDIA's Cascades demo.<br /><br />And some videos:<br /><a href="http://www-e.uni-magdeburg.de/kubisch/medical/sinusvisdemo.wmv">IEEE Vis 2008 Video</a><br /><a href="http://www-e.uni-magdeburg.de/kubisch/medical/colon.wmv">Colonoscopy</a><br /><a href="http://www-e.uni-magdeburg.de/kubisch/medical/neck.wmv">Neck endoscopy</a>pixeljetstreamhttp://www.blogger.com/profile/12290547417993234263noreply@blogger.comtag:blogger.com,1999:blog-3966084362499107503.post-5662764900720642672009-08-28T18:45:00.006+02:002009-08-28T19:00:33.479+02:00the simple cuda test<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.luxinia.de/uploads/Tutorials/tut38_t.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 300px;" src="http://www.luxinia.de/uploads/Tutorials/tut38_t.jpg" alt="" border="0" /></a><br />The refactoring and rewriting of luxinia's internals goes on, prior to open-sourcing the full project
(cleaning up code and trying to apply some of what was learned). On the side, to have some fun, further dx10 features (via their ogl equivalents) are being added, so more of GL's buffer-object features are made accessible on the Lua side.<br />One result was a small sample of coding up a Lua extension dll which runs a CUDA kernel on a vertexbuffer. I had never played with CUDA before, and kind of missed the built-in vector types of Cg, i.e. float3*float must be done via operator overloading in CUDA... which means the compiler has to do the vectorizing itself (but it can, compared to Cg). I've modified the simpleGL NVIDIA sample to make use of a custom vertex color attribute. It's really nice that CUDA supports these datatypes; transform feedback wouldn't allow you to store back to smaller datatypes.<br />Next up is adding transform feedback, so that you can specify output streams in a shaderpass, and a streamobject will be a resource like a texture, assignable to material instances. At first this will only be done for nvidia profiles. I still use the Cg runtime quite a bit, so doing it for their GLSL profiles is a bit ugly at the moment.
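The operator-overloading point is easy to show. A hedged host-side sketch with plain C++ stand-ins (real CUDA code gets float3 from vector_types.h and overloads like these from a helper header or your own code):

```cpp
// Stand-in for CUDA's float3 from vector_types.h (plain C++ here).
struct float3 { float x, y, z; };

// Cg has float3 * float built in; in CUDA you write the overload yourself,
// and the compiler expands it into scalar operations (which it handles well).
inline float3 operator*(const float3& v, float s) {
    return float3{ v.x * s, v.y * s, v.z * s };
}

inline float3 operator+(const float3& a, const float3& b) {
    return float3{ a.x + b.x, a.y + b.y, a.z + b.z };
}
```

In actual kernels these overloads would additionally carry __host__ __device__ qualifiers so they work on both sides.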
I'm not sure yet whether I'll stick with the Cg runtime, or just use it for compilation and do parameter and program creation all myself. That would be a bit more "what you want is what you get", but also more work now.pixeljetstreamhttp://www.blogger.com/profile/12290547417993234263noreply@blogger.comtag:blogger.com,1999:blog-3966084362499107503.post-21404230612000648182009-08-11T13:12:00.003+02:002009-08-11T13:16:05.921+02:00the new siggraph stuffjust looking at the various stuff coming from siggraph (check Atom blog for some onsite info)<br /><br />light propagation (first bounce GI) in CryEngine3<br /><a href="http://www.crytek.com/technology/presentations/">http://www.crytek.com/technology/presentations/</a><br /><br />And more <a href="http://forum.beyond3d.com/showpost.php?p=1318747&postcount=1">links from siggraph</a> at beyond3d, plus some <a href="http://graphics.cs.williams.edu/archive/SweeneyHPG2009/">interesting slides</a> by Epic's Tim Sweeney<br /><br />On a side note, I did not go for the GC in C, but use a simpler refcounter similar to boost's shared/weak pointers, with a dedicated allocator.pixeljetstreamhttp://www.blogger.com/profile/12290547417993234263noreply@blogger.comtag:blogger.com,1999:blog-3966084362499107503.post-45060193386625418872009-08-08T09:24:00.002+02:002009-08-08T09:31:36.528+02:00the idtech5 virtual texturingjust saw this <a href="http://s09.idav.ucdavis.edu/talks/05-JP_id_Tech_5_Challenges.pdf">http://s09.idav.ucdavis.edu/talks/05-JP_id_Tech_5_Challenges.pdf</a> on gamedev, some slides about the virtual texturing in idtech5 (Rage).<br />Other recent things I stumbled upon were of course the OpenGL 3.2 specs (yeah, they finally get some momentum and turned "compatibility" into a profile, not that ugly giant extension). I am currently looking into C hashing libs and a <a href="http://www.hpl.hp.com/personal/Hans_Boehm/gc/">garbagecollector</a> for C.
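The simple refcounter mentioned in the side note above, boost-style strong/weak counts, might look roughly like this in C-style code (hypothetical names, not luxinia's actual implementation; the dedicated allocator is elided in favor of plain malloc):

```cpp
#include <cstdlib>

// Boost-style strong/weak reference counting: the object dies when the
// last strong reference goes away; the counter block itself stays alive
// until the last weak reference is gone too.
typedef struct RefCount {
    int strong;
    int weak;                // weak refs keep the counter alive, not the object
    void* object;
    void (*destroy)(void*);  // called once, when strong hits zero
} RefCount;

RefCount* ref_create(void* object, void (*destroy)(void*)) {
    RefCount* rc = (RefCount*)std::malloc(sizeof(RefCount));
    rc->strong = 1;
    rc->weak = 0;
    rc->object = object;
    rc->destroy = destroy;
    return rc;
}

void ref_acquire(RefCount* rc) { rc->strong++; }

void ref_release(RefCount* rc) {
    if (--rc->strong == 0) {
        if (rc->destroy) rc->destroy(rc->object);
        rc->object = NULL;
        if (rc->weak == 0) std::free(rc);   // no weak holders left either
    }
}

// A weak "lock": yields the object only while strong references remain.
void* ref_lock(RefCount* rc) {
    return rc->strong > 0 ? rc->object : NULL;
}
```

Cycles still leak with this scheme, which is exactly the concern about smart pointers raised below; a tracing GC avoids that at the cost of an extra dependency.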
The reason is that I want to better separate the luxinia GC system from Lua, so that we can move on to using SWIG, i.e. providing a proper C API (and not just the Lua binding) while keeping the reference system working. Smart pointers are not an option, as preventing cycles is too much of an issue, and in general I want to keep the core engine ANSI C.pixeljetstreamhttp://www.blogger.com/profile/12290547417993234263noreply@blogger.comtag:blogger.com,1999:blog-3966084362499107503.post-51353759541173958372009-07-16T13:13:00.003+02:002009-07-16T13:22:02.922+02:00the good readBruno, a friend of mine, gave me a link to a very nice essay today<br /><a href="http://www.longnow.org/views/essays/articles/ArtFeynman.php">http://www.longnow.org/views/essays/articles/ArtFeynman.php</a><br /><br />It's about the early days of "Thinking Machines", a company that built parallel computers in the early 80s, and how Richard Feynman helped them in an almost fatherly role.<br /><br />And I stumbled upon another creative mind today, <a href="http://www.gagneint.com/">Michel Gagne</a>. Very cool illustrations, and the latest game based on his universe looks fantastic.pixeljetstreamhttp://www.blogger.com/profile/12290547417993234263noreply@blogger.comtag:blogger.com,1999:blog-3966084362499107503.post-70189171739384536442009-07-11T14:49:00.005+02:002009-07-19T18:24:33.208+02:00the 3dsmax sdk frustration3dsmax sdk gives me so much love... not.
Today I found out (the hard way) that their BitArray.EnumSet function is broken in 64-bit builds...<br /><br />On a side note, the SDK states that if you want something you draw in the viewport to have a fixed, non-scaling size, you should use<br /><blockquote><span style="font-size:85%;">vpt->GetVPWorldWidth(wpt)/360.0f;</span></blockquote>however this isn't correct; you want to use<br /><blockquote><span style="font-size:85%;">aspect = ((float)gw->getWinSizeX()/(float)gw->getWinSizeY());<br />aspect = max(aspect,1.0f);<br /><br />(vpt->GetVPWorldWidth(wpt)*aspect)/(float)gw->getWinSizeX();</span></blockquote>pixeljetstreamhttp://www.blogger.com/profile/12290547417993234263noreply@blogger.comtag:blogger.com,1999:blog-3966084362499107503.post-79605782994976958582009-06-30T13:33:00.016+02:002010-06-18T11:14:24.274+02:00the jump pageI will update this post with useful links.<br /><br />best "state of the art" stuff, mostly by top companies, latest tech (starcraft2, crysis..)<br /><b>SIGGraph course on real-time rendering</b><br />(2006) <a href="http://developer.amd.com/media/gpu_assets/Course_26_SIGGRAPH_2006.pdf" target="_blank">http://developer.amd.com/media/gpu_assets/Course_26_SIGGRAPH_2006.pdf</a><br />(2007) <a href="http://ati.amd.com/developer/SIGGRAPH07/Course28-Advanced_Real-Time_Rendering_in_3D_Graphics_and_Games_SIGGRAPH07.pdf" target="_blank">http://ati.amd.com/developer/SIGGRAPH07/Course28-Advanced_Real-Time_Rendering_in_3D_Graphics_and_Games_SIGGRAPH07.pdf</a><br />(2008) <a href="http://ati.amd.com/developer/SIGGRAPH08/Siggraph2008-Advances_in_Real-Time_Rendering_Course.pdf" target="_blank">http://ati.amd.com/developer/SIGGRAPH08/Siggraph2008-Advances_in_Real-Time_Rendering_Course.pdf</a><br /><a href="http://www.codersnotes.com/notes/papers-please" target="_blank">http://www.codersnotes.com/notes/papers-please</a> (overview on classic graphics papers)<br />(2009) more to come: <a
href="http://www.bungie.net/News/content.aspx?type=topnews&link=Siggraph_09">http://www.bungie.net/News/content.aspx?type=topnews&link=Siggraph_09</a><br /><br />Siggraph 2008 GI course: <a href="http://www.graphics.cornell.edu/~jaroslav/papers/2008-irradiance_caching_class/index.htm">http://www.graphics.cornell.edu/~jaroslav/papers/2008-irradiance_caching_class/index.htm</a><br /><br />and because I mostly just grab links from these:<br /><b>company publication sites</b><br />(pixar) <a href="http://graphics.pixar.com/research/" target="_blank">http://graphics.pixar.com/research/</a> (the mecca)<br />(guerilla) <a href="http://www.guerrilla-games.com/publications/dr_kz2_rsx_dev07.pdf" target="_blank">http://www.guerrilla-games.com/publications/dr_kz2_rsx_dev07.pdf</a> (killzone 2 tech overview)<br />(valve) <a href="http://www.valvesoftware.com/publications.html" target="_blank">http://www.valvesoftware.com/publications.html</a> (valve has lots of papers on their tech, really good stuff)<br />(bungie) <a href="http://www.bungie.net/Inside/publications.aspx" target="_blank">http://www.bungie.net/Inside/publications.aspx</a> (also spilled the important beans similar to valve)<br />(insomniac) <a href="http://www.insomniacgames.com/tech/techpage.php" target="_blank">http://www.insomniacgames.com/tech/techpage.php</a> (more ps3/code oriented)<br />(epic) <a href="http://unrealtechnology.com/whats-new.php?ref=downloads" target="_blank">http://unrealtechnology.com/whats-new.php?ref=downloads</a><br />(nvidia) <a href="http://developer.nvidia.com/page/documentation.html" target="_blank">http://developer.nvidia.com/page/documentation.html</a><br />(ati) <a href="http://ati.amd.com/developer/techreports.html" target="_blank">http://ati.amd.com/developer/techreports.html</a><br />(crytek) <a href="http://www.crytek.com/technology/presentations/" target="_blank">http://www.crytek.com/technology/presentations/</a><br /><br /><b>hair</b><br /><a 
href="http://graphics.stanford.edu/papers/hair/hair-sg03final.pdf" target="_blank">http://graphics.stanford.edu/papers/hair/hair-sg03final.pdf</a><br /><a href="http://ati.amd.com/developer/gdc/scheuermann_hairrendering.pdf" target="_blank">http://ati.amd.com/developer/gdc/scheuermann_hairrendering.pdf</a> (real-time variant)<br /><br /><b>shading</b><br /><a href="http://www.valvesoftware.com/publications/2006/SIGGRAPH06_Course_ShadingInValvesSourceEngine.pdf" target="_blank">http://www.valvesoftware.com/publications/2006/SIGGRAPH06_Course_ShadingInValvesSourceEngine.pdf</a><br /><a href="http://www.bungie.net/images/Inside/publications/presentations/lighting_material.zip" target="_blank">http://www.bungie.net/images/Inside/publications/presentations/lighting_material.zip</a><br /><a href="http://web4.cs.ucl.ac.uk/staff/t.weyrich/projects/phd/weyrich-2006-phd-lowres.pdf" target="_blank">http://web4.cs.ucl.ac.uk/staff/t.weyrich/projects/phd/weyrich-2006-phd-lowres.pdf</a><br /><br /><b>compression</b><br /><a href="http://developer.nvidia.com/object/real-time-normal-map-dxt-compression.html" target="_blank">http://developer.nvidia.com/object/real-time-normal-map-dxt-compression.html</a><br /><br /><b>meshing / deformation</b><br /><a href="http://www2.in.tu-clausthal.de/~hormann/parameterization/index.html" target="_blank">http://www2.in.tu-clausthal.de/~hormann/parameterization/index.html</a><br /><a href="http://graphics.uni-bielefeld.de/publications/papers/" target="_blank">http://graphics.uni-bielefeld.de/publications/papers/</a><br /><a href="http://www.cs.nyu.edu/~sorkine/" target="_blank">http://www.cs.nyu.edu/~sorkine/</a><br /><br /><b>filter/image</b><br /><a href="http://people.csail.mit.edu/sparis/bf_course/" target="_blank">http://people.csail.mit.edu/sparis/bf_course/</a><br /><a href="http://web4.cs.ucl.ac.uk/staff/t.weyrich/projects/xlrcam/kim09xlrcam-lowres.pdf" 
target="_blank">http://web4.cs.ucl.ac.uk/staff/t.weyrich/projects/xlrcam/kim09xlrcam-lowres.pdf</a><br /><br /><b>architecture</b><br /><a href="http://beautifulpixels.blogspot.com/2008/08/multi-platform-multi-core-architecture.html" target="_blank">http://beautifulpixels.blogspot.com/2008/08/multi-platform-multi-core-architecture.html</a><br />nice post by a gamebryo dev on the different architectures.<br /><a href="http://www.crytek.com/fileadmin/user_upload/inside/presentations/2009/A_bit_more_deferred_-_CryEngine3.ppt" target="_blank">http://www.crytek.com/fileadmin/user_upload/inside/presentations/2009/A_bit_more_deferred_-_CryEngine3.ppt</a><br /><br /><span style="font-weight: bold;">raytracing</span><br /><a href="http://ompf.org/forum/viewtopic.php?f=3&t=9">http://ompf.org/forum/viewtopic.php?f=3&t=9</a><br /><br />mostly code related, sometimes however hints at what tech is to come in future<br /><b>individuals / blogs</b><br />timothy farrar (human head): <a href="http://farrarfocus.blogspot.com/" target="_blank">http://farrarfocus.blogspot.com/</a> <a href="http://www.farrarfocus.com/atom/" target="_blank">http://www.farrarfocus.com/atom/</a><br />ignacio castano (nvidia): <a href="http://castano.ludicon.com/blog/" target="_blank">http://castano.ludicon.com/blog/</a><br />wolfgang engel (former rockstar): <a href="http://diaryofagraphicsprogrammer.blogspot.com/" target="_blank">http://diaryofagraphicsprogrammer.blogspot.com/</a><br />tom forsyth (former radgametools, now intel): <a href="http://home.comcast.net/~tom_forsyth/blog.wiki.html" target="_blank">http://home.comcast.net/~tom_forsyth/blog.wiki.html</a><br />Iñigo Quílez (demoscene): <a href="http://iquilezles.org/www/" target="_blank">http://iquilezles.org/www/</a><br />Kun Zhou (researcher): <a href="http://www.kunzhou.net/" target="_blank">http://www.kunzhou.net</a><br />Rui Wang (researcher): <a 
href="http://www.cs.umass.edu/~ruiwang/#publications">http://www.cs.umass.edu/~ruiwang/#publications</a><br />Carsten Dachsbacher (researcher): <a href="http://www.vis.uni-stuttgart.de/~dachsbcn/publications.html">http://www.vis.uni-stuttgart.de/~dachsbcn/publications.html</a><br />Szirmay-Kalos László (researcher): <a href="http://www.iit.bme.hu/~szirmay/puba.html">http://www.iit.bme.hu/~szirmay/puba.html</a><br /><a href="http://erdani.org/publications/">http://erdani.org/publications/</a><br />Marc Stamminger (researcher): <a href="http://www9.informatik.uni-erlangen.de/people/publishedby/marc/stamminger/">http://www9.informatik.uni-erlangen.de/people/publishedby/marc/stamminger/</a><br />computer graphics at williams college: <a href="http://graphics.cs.williams.edu/papers/">http://graphics.cs.williams.edu/papers/</a><br />Jiaping Wang(researcher) <a href="http://www.lightthoughts.com/jpwang/">http://www.lightthoughts.com/jpwang/</a><br />icare3d (researcher): <a href="http://www.icare3d.org/blog_techno/">http://www.icare3d.org/blog_techno/</a><br />Jaroslav Krivanek (researcher) <a href="http://www.graphics.cornell.edu/~jaroslav/"> http://www.graphics.cornell.edu/~jaroslav/</a>pixeljetstreamhttp://www.blogger.com/profile/12290547417993234263noreply@blogger.comtag:blogger.com,1999:blog-3966084362499107503.post-34771897974557539602009-06-14T17:17:00.000+02:002009-06-16T21:02:19.270+02:00the lowpoly past - part II - freelancingjust coming home from a gorgeous once in a lifetime lovely fairytale wedding of my cousin. First some more background info.<br /><br />During studies I did a little bit of freelancing for game art. Thanks to a colleague from the first mod I worked on (TerrorQuake2), I made contact with Lemsko, a German 3d artist and virtual aviation enthusiast. With his support I did work for IEN on warbirds2 and their tank game armored assault... 
yeah back then I was into military models quite a bit.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhiMFHdwcdP945g1mqEnoenPqZNRREqqmvpz-0KjFGF1AH60GUbQnHBy7tmFzyy37G0A1KCZnKar6dnH9LSp3RYQUeYUsQ0D7c2Z7Mk1on3jg5oMBafhFLJaFU85iEtLpMk70STtYHMckWT/s1600-h/il2-sturmovik-2.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 240px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhiMFHdwcdP945g1mqEnoenPqZNRREqqmvpz-0KjFGF1AH60GUbQnHBy7tmFzyy37G0A1KCZnKar6dnH9LSp3RYQUeYUsQ0D7c2Z7Mk1on3jg5oMBafhFLJaFU85iEtLpMk70STtYHMckWT/s320/il2-sturmovik-2.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5348001530430830226" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiTpZd_mqa9jTOzly9LEqUba-AitFPbWvGkCN5jzA-WW3tX1_HwNG1xfOrUL7CTcujOBH2BohaYFL53fAm4_HbVj_onR9Z6ToSHMLVsEajAtDqsjm3QpNnIG7vYJSwavkvxss4x__G1SJL/s1600-h/il2-sturmovik-1.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 240px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiTpZd_mqa9jTOzly9LEqUba-AitFPbWvGkCN5jzA-WW3tX1_HwNG1xfOrUL7CTcujOBH2BohaYFL53fAm4_HbVj_onR9Z6ToSHMLVsEajAtDqsjm3QpNnIG7vYJSwavkvxss4x__G1SJL/s320/il2-sturmovik-1.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5348001528523464818" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0H_S-1Cp6oKDjK8Gayi97bRrZSiZUrUFlUuS7pLTcyDMJ4eFFhBbuxlLKZiQtsgPkrq7UXUOcol99SljT2FUNGoukj1t7l7oCtZ9j3AI6KuHumd_tjSbbNF6UyudcntIJvLlMn_5Ehrv5/s1600-h/il2dawn04.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 146px;" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0H_S-1Cp6oKDjK8Gayi97bRrZSiZUrUFlUuS7pLTcyDMJ4eFFhBbuxlLKZiQtsgPkrq7UXUOcol99SljT2FUNGoukj1t7l7oCtZ9j3AI6KuHumd_tjSbbNF6UyudcntIJvLlMn_5Ehrv5/s320/il2dawn04.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5348001524534695954" /></a><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqW_LWidk5iV0KrLlnAJ4HbkvnFnPWv0e2s4FSkmZmIhxay4XIKT9TaJFXqz44YQIKJa6rqqbB4Qg_AaYd2W2M1fOOO64HrmeK4WaL6sxCp2oEDYE0haBs2tWvXtA8nwPcpvKfASJ2Nq2i/s1600-h/shermanf03.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 240px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqW_LWidk5iV0KrLlnAJ4HbkvnFnPWv0e2s4FSkmZmIhxay4XIKT9TaJFXqz44YQIKJa6rqqbB4Qg_AaYd2W2M1fOOO64HrmeK4WaL6sxCp2oEDYE0haBs2tWvXtA8nwPcpvKfASJ2Nq2i/s320/shermanf03.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5348001868709457234" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPz0KlejobxIsl2EX1TpBxX5Sy5WJEmcZeRN5U6vTaPdyiGY3kUW0b8WDBFnMFjV5xDGx-rfoK_X0-jugT0aebVaSFUv9eJgj7XcoMsKreltErVPqGDErVf4qPkGvsbqOLhOU-1BVHrO6l/s1600-h/shermanf02.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 240px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPz0KlejobxIsl2EX1TpBxX5Sy5WJEmcZeRN5U6vTaPdyiGY3kUW0b8WDBFnMFjV5xDGx-rfoK_X0-jugT0aebVaSFUv9eJgj7XcoMsKreltErVPqGDErVf4qPkGvsbqOLhOU-1BVHrO6l/s320/shermanf02.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5348001865005015218" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUU85KPSaLY8kI06n78CtjFNOjdNBKoA345W4ADyJowiq2a5FRGP1EvC07VhbdDWLEACWE5Vw8gbnSDK9iZTYY0dytj8qB7TbiWhk6mEgaz94WsVmWvwmPW4ok50n78Dfdhn2ppiFU3838/s1600-h/shermanf01.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 240px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUU85KPSaLY8kI06n78CtjFNOjdNBKoA345W4ADyJowiq2a5FRGP1EvC07VhbdDWLEACWE5Vw8gbnSDK9iZTYY0dytj8qB7TbiWhk6mEgaz94WsVmWvwmPW4ok50n78Dfdhn2ppiFU3838/s320/shermanf01.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5348001861013747234" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbq5op5jWVZUG7qSG3P_xfdgUSCTJqtM2FM7z8WOnhlrUYPtnxRZdsez8oZAF7lKxGOfHvHU9aAckniVHcRiQh5k3fSUV_ak8b-bb5fo7OKqsFtHl3rstMEq5IC1J9cYbqWUsF8cTsgKYl/s1600-h/sherman_aa_sell.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 171px; height: 250px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbq5op5jWVZUG7qSG3P_xfdgUSCTJqtM2FM7z8WOnhlrUYPtnxRZdsez8oZAF7lKxGOfHvHU9aAckniVHcRiQh5k3fSUV_ak8b-bb5fo7OKqsFtHl3rstMEq5IC1J9cYbqWUsF8cTsgKYl/s320/sherman_aa_sell.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5348001859080613090" /></a><br /><br />For a fantasy competitive jump'n run quake mod following animations were created. 
I've also done a bit of character work for that project, which died soon.<br /><br /><object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/FGFAZwsumyg&hl=en&fs=1&color1=0x3a3a3a&color2=0x999999"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/FGFAZwsumyg&hl=en&fs=1&color1=0x3a3a3a&color2=0x999999" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br /><br />The earlier military model work lead thru freelance work for esimgames.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyDWRSKjQgxnimOv9l-pBA_Wu1fBJo3mqrNEMLic5-8Mu-z6w3P_Ef5XklYlhlf_Dk0Mh_3pweTwpSZA9SCix0l_ace5_P-7VvQa79q6x8ncVWAsxeOug0JUIXz2zIAb-2oqOBkj27Lmoz/s1600-h/ch47_01.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 207px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyDWRSKjQgxnimOv9l-pBA_Wu1fBJo3mqrNEMLic5-8Mu-z6w3P_Ef5XklYlhlf_Dk0Mh_3pweTwpSZA9SCix0l_ace5_P-7VvQa79q6x8ncVWAsxeOug0JUIXz2zIAb-2oqOBkj27Lmoz/s320/ch47_01.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5348002668905690706" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikwkKpZusSsSjqNsN5gO-lO6KU8506Ox04ucayd3qjSMfu1MWtyEn0F1pO32Sh_qWVg5ZTvnkgjV43VzMDoYBN4GUqmYoFx6ul04KMlJzhOHJPCDXEy2RoVwTvGVIEjL0p37cRjDEygG5w/s1600-h/mi8_08.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 226px;" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikwkKpZusSsSjqNsN5gO-lO6KU8506Ox04ucayd3qjSMfu1MWtyEn0F1pO32Sh_qWVg5ZTvnkgjV43VzMDoYBN4GUqmYoFx6ul04KMlJzhOHJPCDXEy2RoVwTvGVIEjL0p37cRjDEygG5w/s320/mi8_08.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5348002666034025602" /></a><br /><br />As my studies required lots of time, I only did very little freelance work basically to keep the 3dsmax license floating ;)pixeljetstreamhttp://www.blogger.com/profile/12290547417993234263noreply@blogger.comtag:blogger.com,1999:blog-3966084362499107503.post-35247530966198169282009-06-11T13:53:00.000+02:002009-06-14T17:18:25.365+02:00the lowpoly past - part I - quake daysFrom early on I enjoyed watching movies a lot, especially the technical effects have always fascinated me. Seeing Star Wars for the first time, I was amazed at how convincing the scenery was set; the world seemed so real. After watching the Terminator, Abyss and other early CGI movies, this new technique of special effects got my attention.Finally, Jurassic Park in 1993, assured me that I want to do that in future, I want to create my own dinosaurs and bring my own worlds to life... This is why I got into 3D-modelling and animation.<br /><br />CGI knows almost no limits, the creative mind can fulfill its dreams to all extents. This fascinated me, and still does. I enjoyed playing with LEGO's a lot, and once I moved from the floor to a computer desk, it was finally possible to bring things to life on the screen. With a 3D studio r2 dos student version I made my first steps in 3d. Having no internet and no tutorials I had to bite thru the complexity of this program myself. Once I got the hang of it, I wanted to do big stuff immediately, but the projects I set for myself were always too big, or too time consuming; I preferred playing games a lot more. Another reason was that I didn't really want to model all the things necessary, I wanted to animate it... 
so no project was ever finished, but I gained some experience with timing, camera movement, etc.<br /><br />Later on I united my love for 3d with my love for computer games. After getting onto the internet I searched for better software, tutorials and of course games. The Quake2 modification community caught my attention. There I found other young developers who created their own games and of course needed 3d art as well: models of weapons, characters... My first job was as a modeller for Terror Quake 2 around late 1998. After the team was restructured I also took over the animator's job. This team, which later changed its name to TeamHavoc, absorbed me, and I worked on mods for quite some time. I was lead animator for a Quake3 mod called <a href="http://www.urbanterror.net/" target="_blank">Urban Terror</a>. Through this team's skill and good connections to id software, I managed to get a license of 3dsmax at quakecon 2000; thanks again to my teammates from <a href="http://www.silicon-ice.com/" target="_blank">Silicon Ice Development</a>, to <a href="http://www.idsoftware.com/" target="_blank">id software</a> and to <a href="http://www.discreet.com/" target="_blank">discreet</a> for giving away some copies to young developers.<br /><br />Here is some work for urbanterror (mostly animations, but also some weapon models):<br /><br /><object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/caluArpgpKk&hl=en&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/caluArpgpKk&hl=en&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br /><br /><object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/uPAkBdM3yl8&hl=en&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess"
value="always"></param><embed src="http://www.youtube.com/v/uPAkBdM3yl8&hl=en&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br /><br /><object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/qrpWnliolEY&hl=en&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/qrpWnliolEY&hl=en&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br /><br /><br />When I started my studies of "computational visualistics" at the Otto-von-Guericke University of Magdeburg in 2002, I left the urbanterror development team before the beta3 release and got more into coding myself.pixeljetstreamhttp://www.blogger.com/profile/12290547417993234263noreply@blogger.com