Original site: http://www.bluesnews.com/abrash/contents.shtml

Sorted Spans in Action


by Michael Abrash


Last time, we dove headlong into the intricacies of hidden surface removal by way of z-sorted (actually, 1/z-sorted) spans. At the end, I noted that we were currently using 1/z-sorted spans in Quake, but it was unclear whether we’d switch back to BSP order. Well, it’s clear now: We’re back to sorting spans by BSP order.

In Robert A. Heinlein’s wonderful story “The Man Who Sold the Moon,” the chief engineer of the Moon rocket project tries to figure out how to get a payload of three astronauts to the Moon and back. He starts out with a four-stage rocket design, but finds that it won’t do the job, so he adds a fifth stage. The fifth stage helps, but not quite enough, “Because,” he explains, “I’ve had to add in too much dead weight, that’s why.” (The dead weight is the control and safety equipment that goes with the fifth stage.) He then tries adding yet another stage, only to find that the sixth stage actually results in a net slowdown. In the end, he has to give up on the three-person design and build a one-person spacecraft instead.

1/z-sorted spans in Quake turned out pretty much the same way, as we’ll see in a moment. First, though, I’d like to note up front that this column is very technical and builds heavily on previously-covered material; reading the last column is strongly recommended, and reading the six columns before that, which cover BSP trees, 3-D clipping, and 3-D math, might be a good idea as well. I regret that I can’t make this column stand completely on its own, but the truth is that commercial-quality 3-D graphics programming requires vastly more knowledge and code than did the 2-D graphics I’ve written about in years past. And make no mistake about it, this is commercial quality stuff; in fact, the code in this column uses the same sorting technique as the test version of Quake, qtest1.zip, that we just last week placed on the Internet. These columns are the Real McCoy, reports from the leading edge, and I trust that you’ll be patient if careful rereading and some catch-up reading of prior columns are required to absorb everything contained herein. Besides, the ultimate reference for any design is working code, which you’ll find in part in Listing 1 and in its entirety in ftp.idsoftware.com/mikeab/ddjzsort.zip.

Quake and sorted spans

As you'll recall from last time, Quake uses sorted spans to get zero overdraw while rendering the world, thereby both improving overall performance and leveling frame rates by speeding up scenes that would otherwise experience heavy overdraw. Our original design used spans sorted by BSP order; because we traverse the world BSP tree from front to back relative to the viewpoint, the order in which BSP nodes are visited is a guaranteed front to back sorting order. We simply gave each node an increasing BSP sequence number as it was visited, set each polygon's sort key to the BSP sequence number of the node (BSP splitting plane) it lay on, and used those sort keys when generating spans.
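For illustration, here is a minimal sketch of that sequence-number scheme, not id's actual code; node_t, polygon_t, vec3_t, and PointOnPlaneSide() are hypothetical names used only to show the idea of numbering nodes during a front-to-back traversal and copying the number into each polygon's sort key.

    /* A minimal sketch, assuming hypothetical structures: number the nodes
       during a front-to-back traversal and copy the number into each
       polygon's sort key. */
    static int bsp_sequence;

    void TraverseNode(node_t *node, vec3_t viewpoint)
    {
        int side;

        if (node == NULL)
            return;                                 /* reached an empty leaf */

        side = PointOnPlaneSide(viewpoint, &node->plane);  /* 0 = front, 1 = back */

        /* Visit the child nearer the viewpoint first, so nodes are numbered
           in guaranteed front-to-back order. */
        TraverseNode(node->children[side], viewpoint);

        node->sequence = bsp_sequence++;
        for (polygon_t *p = node->polygons; p != NULL; p = p->next)
            p->sortkey = node->sequence;            /* smaller key == nearer */

        TraverseNode(node->children[!side], viewpoint);
    }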

(In a change from earlier designs, polygons now are stored on nodes, rather than leaves, which are the convex subspaces carved out by the BSP tree. Visits to potentially-visible leaves are used only to mark that the polygons that touch those leaves are visible and need to be drawn, and each marked-visible polygon is then drawn after everything in front of its node has been drawn. This results in less BSP splitting of polygons, which is A Good Thing, as explained below.)

This worked flawlessly for the world, but had a couple of downsides. First, it didn’t address the issue of sorting small, moving BSP models such as doors; those models could be clipped into the world BSP tree’s leaves and assigned sort keys corresponding to the leaves into which they fell, but there was still the question of how to sort multiple BSP models in the same world leaf against each other. Second, strict BSP order requires that polygons be split so that every polygon falls entirely within a single leaf. This can be stretched by putting polygons on nodes, allowing for larger polygons on average, but even then, polygons still need to be split so that every polygon falls within the bounding volume for the node on which it lies. The end result, in either case, is more and smaller polygons than if BSP order weren’t used--and that, in turn, means lower performance, because more polygons must be clipped, transformed, and projected, more sorting must be done, and more spans must be drawn.

We figured that if only we could avoid those BSP splits, Quake would get a lot faster. Accordingly, we switched from sorting on BSP order to sorting on 1/z, and left our polygons unsplit. Things did get faster at first, but not as much as we had expected, for two reasons.

First, as the world BSP tree is descended, we clip each node’s bounding box in turn to see if it’s inside or outside each plane of the view frustum. The clipping results can be remembered, and often allow the avoidance of some or all clipping for the node’s polygons. For example, all polygons in a node that has a trivially accepted bounding box are likewise guaranteed to be unclipped and in the frustum, since they all lie within the node’s volume, and need no further clipping. This efficient clipping mechanism vanished as soon as we stepped out of BSP order, because a polygon was no longer necessarily confined to its node’s volume.

Second, sorting on 1/z isn’t as cheap as sorting on BSP order, because floating-point calculations and comparisons are involved, rather than integer compares. So Quake got faster, but, like Heinlein’s fifth rocket stage, there was clear evidence of diminishing returns.

That wasn’t the bad part; after all, even a small speed increase is a good thing. The real problem was that our initial 1/z sorting proved to be unreliable. We first ran into problems when two forward-facing polygons started at a common edge, because it was hard to tell which one was really in front (as discussed below), and we had to do additional floating-point calculations to resolve these cases. This fixed the problems for a while, but then odd cases started popping up where just the right combination of polygon alignments caused new sorting errors. We tinkered with those too, adding more code and incurring additional slowdowns in the process. Finally, we had everything working smoothly again, although by this point Quake was back to pretty much the same speed it had been with BSP sorting.

And then yet another crop of sorting errors popped up.

We could have fixed those errors too; we’ll take a quick look at how to deal with such cases shortly. However, like the sixth rocket stage, the fixes would have made Quake slower than it had been with BSP sorting. So we gave up and went back to BSP order, and now the code is simpler and sorting works reliably. It’s too bad our experiment didn’t work out, but it wasn’t wasted time, because we learned quite a bit. In particular, we learned that the information provided by a simple, reliable world ordering mechanism such as a BSP tree can do more good than is immediately apparent, in terms of both performance and solid code.

Nonetheless, sorting on 1/z can be a valuable tool, used in the right context; drawing a Quake world just doesn't happen to be such a case. In fact, sorting on 1/z is how we're now handling the sorting of multiple BSP models that lie within the same world leaf in Quake; here we don't have the option of using BSP order (because we're drawing multiple independent trees), so we've set restrictions on the BSP models to avoid running into the types of 1/z sorting errors we encountered drawing the Quake world. Below, we'll look at another application in which sorting on 1/z is quite useful, one where objects move freely through space. As is so often the case in 3-D, there is no one "right" technique, but rather a great many different techniques, each one handy in the right situations. Often, a combination of techniques is beneficial, as for example the combination in Quake of BSP sorting for the world and 1/z sorting for BSP models in the same world leaf.

For the remainder of this column, I’m going to look at the three main types of 1/z span sorting, then discuss a sample 3-D app built around 1/z span sorting.

Types of 1/z span sorting

As a quick refresher, with 1/z span sorting, all the polygons in a scene are treated as sets of screenspace pixel spans, and 1/z (where z is distance from the viewpoint in viewspace, as measured along the viewplane normal) is used to sort the spans so that the nearest span overlapping each pixel is drawn. As discussed last time, in the sample program we’re actually going to do all our sorting with polygon edges, which represent spans in an implicit form.
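To make the later discussion concrete, here is a small sketch, under assumed names (surf_t and its gradient fields are illustrative, not Listing 1's), of how 1/z is evaluated for a surface at a given pixel from its per-surface gradients.

    /* Sketch: evaluate 1/z for a surface at screen pixel (x, y) from the
       1/z value at the origin and the 1/z gradients with respect to
       screen x and y (per-surface fields assumed). */
    double SurfaceZInverse(const surf_t *s, int x, int y)
    {
        return s->zinvorigin + x * s->zinvstepx + y * s->zinvstepy;
    }

    /* A larger 1/z means the surface is nearer the viewpoint, so the span
       drawn at a pixel belongs to the overlapping surface with the
       greatest 1/z there. */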

There are three types of 1/z span sorting, each requiring a different implementation. In order of increasing speed and decreasing complexity, they are: intersecting, abutting, and independent. (These are names of my own devising; I haven’t come across any standard nomenclature.)

Intersecting span sorting

Intersecting span sorting occurs when polygons can interpenetrate. Thus, two spans may cross such that part of each span is visible, in which case the spans have to be split and drawn appropriately, as shown in Figure 1.







Figure 1: Intersecting span sorting. Polygons A and B are viewed from above.

 


Intersecting is the slowest and most complicated type of span sorting, because it is necessary to compare 1/z values at two points in order to detect interpenetration, and additional work must be done to split the spans as necessary. Thus, although intersecting span sorting certainly works, it’s not the first choice for performance.
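As a rough illustration of that two-point test, the sketch below (hypothetical names, reusing the SurfaceZInverse() helper sketched earlier) compares two surfaces at both ends of a candidate span and, if the comparison changes sign inside the span, computes the x coordinate at which the span must be split.

    /* Sketch: decide whether surface a is in front of surface b across a
       candidate span [x0, x1) on scan line y.  If the 1/z comparison
       changes sign inside the span, the polygons interpenetrate and the
       span must be split at the crossover. */
    int CompareSpan(const surf_t *a, const surf_t *b, int x0, int x1, int y,
                    int *splitx)
    {
        double d0 = SurfaceZInverse(a, x0, y) - SurfaceZInverse(b, x0, y);
        double d1 = SurfaceZInverse(a, x1, y) - SurfaceZInverse(b, x1, y);

        if (d0 >= 0 && d1 >= 0)
            return 1;                  /* a is in front across the whole span */
        if (d0 <= 0 && d1 <= 0)
            return -1;                 /* b is in front across the whole span */

        /* Sign change: find the x where the two 1/z values are equal. */
        *splitx = x0 + (int)((x1 - x0) * (d0 / (d0 - d1)));
        return 0;                      /* interpenetration; split here */
    }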

Abutting span sorting

Abutting span sorting occurs when polygons that are not part of a continuous surface can butt up against each other, but don’t interpenetrate, as shown in Figure 2. This is the sorting used in Quake, where objects like doors often abut walls and floors, and turns out to be more complicated than you might think. The problem is that when an abutting polygon starts on a given scan line, as with polygon B in Figure 2, it starts at exactly the same 1/z value as the polygon it abuts, in this case, polygon A, so additional sorting is needed when these ties happen. Of course, the two-point sorting used for intersecting polygons would work, but we’d like to find something faster.







Figure 2: Abutting span sorting. Polygons A and B are viewed from above.

As it turns out, the additional sorting for abutting polygons is actually quite simple; whichever polygon has a greater 1/z gradient with respect to screen x (that is, whichever polygon is heading fastest toward the viewer along the scan line) is the front one. The hard part is identifying when ties--that is, abutting polygons--occur; due to floating-point imprecision, as well as fixed-point edge-stepping imprecision that can move an edge slightly on the screen, calculations of 1/z from the combination of screen coordinates and 1/z gradients (as discussed last time) can be slightly off, so most tie cases will show up as near matches, not exact matches. This imprecision makes it necessary to perform two comparisons, one with an adjust-up by a small epsilon and one with an adjust-down, creating a range in which near-matches are considered matches. Fine-tuning this epsilon to catch all ties without falsely reporting close-but-not-abutting edges as ties proved to be troublesome in Quake, and the epsilon calculations and extra comparisons slowed things down.
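The sketch below shows the shape of that tie-breaking logic; the epsilon value and the names are illustrative only, not Quake's actual constants or code.

    /* Sketch of the epsilon tie-breaking test for abutting polygons. */
    #define ZI_EPSILON  0.001

    int NewSurfaceIsNearer(const surf_t *snew, const surf_t *stop, int x, int y)
    {
        double zinew = SurfaceZInverse(snew, x, y);
        double zitop = SurfaceZInverse(stop, x, y);

        if (zinew > zitop + ZI_EPSILON)
            return 1;                  /* clearly nearer */
        if (zinew < zitop - ZI_EPSILON)
            return 0;                  /* clearly farther */

        /* Near match: treat it as an abutting tie and compare the 1/z
           gradients in screen x; the surface heading toward the viewer
           faster along the scan line is the front one. */
        return snew->zinvstepx > stop->zinvstepx;
    }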

I do think that abutting 1/z span sorting could have been made reliable enough for production use in Quake, were it not that we share edges between adjacent polygons in Quake, so that the world is a large polygon mesh. When a polygon ends and is followed by an adjacent polygon that shares the edge that just ended, we simply assume that the adjacent polygon sorts relative to other active polygons in the same place as the one that ended (because the mesh is continuous and there's no interpenetration), rather than doing a 1/z sort from scratch. This speeds things up by saving a lot of sorting, but it means that if there is a sorting error, a whole string of adjacent polygons can end up sorted incorrectly, pulled in by the one missorted polygon. Missorting is a very real hazard when a polygon is very nearly perpendicular to the screen, so that the 1/z calculations push the limits of numeric precision, especially in single-precision floating point.

Many caching schemes are possible with abutting span sorting, because any given pair of polygons, being noninterpenetrating, will sort in the same order throughout a scene. However, in Quake at least, the benefits of caching sort results were outweighed by the additional overhead of maintaining the caching information, and every caching variant we tried actually slowed Quake down.

Independent span sorting

Finally, we come to independent span sorting, the simplest and fastest of the three, and the type the sample code in Listing 1 uses. Here, polygons never intersect or touch any other polygons except adjacent polygons with which they form a continuous mesh. This means that when a polygon starts on a scan line, a single 1/z comparison between that polygon and the polygons it overlaps on the screen is guaranteed to produce correct sorting, with no extra calculations or tricky cases to worry about.

Independent span sorting is ideal for scenes with lots of moving objects that never actually touch each other, such as a space battle. Next, we’ll look at an implementation of independent 1/z span sorting.

1/z span sorting in action

Listing 1 is a portion of a program that demonstrates independent 1/z span sorting. This program is based on the sample 3-D clipping program from the March column; however, the earlier program did hidden surface removal (HSR) by simply z-sorting whole objects and drawing them back to front, while Listing 1 draws all polygons by way of a 1/z-sorted edge list. Consequently, where the earlier program worked only so long as object centers correctly described sorting order, Listing 1 works properly for all combinations of non-intersecting and non-abutting polygons. In particular, Listing 1 correctly handles concave polyhedra; a new L-shaped object (the data for which is not included in Listing 1) has been added to the sample program to illustrate this capability. The ability to handle complex shapes makes Listing 1 vastly more useful for real-world applications than the earlier 3-D clipping demo.

By the same token, Listing 1 is quite a bit more complicated than the earlier code. The earlier code’s HSR consisted of a z-sort of objects, followed by the drawing of the objects in back-to-front order, one polygon at a time. Apart from the simple object sorter, all that was needed was backface culling and a polygon rasterizer.

Listing 1 replaces this simple pipeline with a three-stage HSR process. After backface culling, the edges of each of the polygons in the scene are added to the global edge list, by way of AddPolygonEdges(). After all edges have been added, the edges are turned into spans by ScanEdges(), with each pixel on the screen being covered by one and only one span (that is, there’s no overdraw). Once all the spans have been generated, they’re drawn by DrawSpans(), and rasterization is complete.
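Put together, the per-frame flow looks roughly like the sketch below; AddPolygonEdges(), ScanEdges(), and DrawSpans() are the Listing 1 names, while ClearEdgeLists(), PolygonIsBackfacing(), and the parameter types are placeholders of my own.

    /* Sketch of the three-stage HSR pipeline described above. */
    void RenderFrame(polygon_t *polygons, int numpolys, vec3_t viewpoint)
    {
        ClearEdgeLists();                           /* per-frame reset (placeholder) */

        for (int i = 0; i < numpolys; i++)
            if (!PolygonIsBackfacing(&polygons[i], viewpoint))
                AddPolygonEdges(&polygons[i]);      /* stage 1: edges into the global list */

        ScanEdges();                                /* stage 2: edges -> spans, zero overdraw */
        DrawSpans();                                /* stage 3: rasterize the spans */
    }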

There’s nothing tricky about AddPolygonEdges(), and DrawSpans(), as implemented in Listing 1, is very straightforward as well. In an implementation that supported texture mapping, however, all the spans wouldn’t be put on one global span list and drawn at once, as is done in Listing 1, because that would result in drawing spans from all the surfaces in no particular order. (A surface is a drawing object that’s originally described by a polygon, but in ScanEdges() there is no polygon in the classic sense of a set of vertices bounding an area, but rather just a set of edges and a surface that describes how to draw the spans outlined by those edges.) That would mean constantly skipping from one texture to another, which in turn would hurt processor cache coherency a great deal, and would also incur considerable overhead in setting up gradient and perspective calculations each time a surface was drawn. In Quake, we have a linked list of spans hanging off each surface, and draw all the spans for one surface before moving on to the next surface.

The core of Listing 1, and the most complex aspect of 1/z-sorted spans, is ScanEdges(), where the global edge list is converted into a set of spans describing the nearest surface at each pixel. This process is actually pretty simple, though, if you think of it as follows.

For each scan line, there is a set of active edges, those edges that intersect the scan line. A good part of ScanEdges() is dedicated to adding any edges that first appear on the current scan line (scan lines are processed from the top scan line on the screen to the bottom), removing edges that reach their bottom on the current scan line, and x-sorting the active edges so that the active edges for the next scan can be processed from left to right. All this is per-scan-line maintenance, and is basically just linked list insertion, deletion, and sorting.

The heart of the action is the loop in ScanEdges() that processes the edges on the current scan line from left to right, generating spans as needed. The best way to think of this loop is as a surface event processor, where each edge is an event with an associated surface. Each leading edge is an event marking the start of its surface on that scan line; if the surface is nearer than the current nearest surface, then a span ends for the nearest surface, and a span starts for the new surface. Each trailing edge is an event marking the end of its surface; if its surface is currently nearest, then a span ends for that surface, and a span starts for the next-nearest surface (the surface with the next-largest 1/z at the coordinate where the edge intersects the scan line). One handy aspect of this event-oriented processing is that leading and trailing edges do not need to be explicitly paired, because they are implicitly paired by pointing to the same surface. This saves the memory and time that would otherwise be needed to track edge pairs.

One more element is required in order for ScanEdges() to work efficiently. Each time a leading or trailing edge occurs, it must be determined whether its surface is nearest (at a larger 1/z value than any currently active surface); in addition, for leading edges, the currently topmost surface must be known, and for trailing edges, it may be necessary to know the currently next-to-topmost surface. The easiest way to accomplish this is with a surface stack; that is, a linked list of all currently active surfaces, starting with the nearest surface and progressing toward the farthest surface, which, as described below, is always the background surface. (The operation of this sort of edge event-based stack was described and illustrated in the May column.) Each leading edge causes its surface to be 1/z-sorted into the surface stack, with a span emitted if necessary. Each trailing edge causes its surface to be removed from the surface stack, again with a span emitted if necessary. As you can see from Listing 1, it takes a fair bit of code to implement this, but all that’s really going on is a surface stack driven by edge events.
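Stripped of the bookkeeping in Listing 1, the surface-stack logic amounts to something like the following sketch; the names and helper functions are illustrative, the stack is a linked list from nearest to farthest ending with the background surface, and span bookkeeping is reduced to EmitSpan() calls.

    /* Much-simplified sketch of the surface stack driven by edge events. */
    void ProcessEdgeEvent(edge_t *edge, int x, int y)
    {
        surf_t *s = edge->surf;

        if (edge->leading) {
            if (SurfaceZInverse(s, x, y) > SurfaceZInverse(surfstack_top, x, y)) {
                EmitSpan(surfstack_top, x);      /* close the old nearest span */
                StackPushFront(s);               /* s becomes the new nearest surface */
                s->spanstart = x;
            } else {
                StackInsertByZInverse(s, x, y);  /* hidden for now; sort it in farther down */
            }
        } else {                                 /* trailing edge */
            if (s == surfstack_top) {
                EmitSpan(s, x);                  /* close this surface's span */
                StackPopFront();
                surfstack_top->spanstart = x;    /* the next-nearest surface takes over */
            } else {
                StackRemove(s);                  /* ends while hidden; no span emitted */
            }
        }
    }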

Implementation notes

Finally, a few notes on Listing 1. First, you’ll notice that although we clip all polygons to the view frustum in worldspace, we nonetheless later clamp them to valid screen coordinates before adding them to the edge list. This catches any cases where arithmetic imprecision results in clipped polygon vertices that are a bit outside the frustum. I’ve only found such imprecision to be significant at very small z distances, so clamping would probably be unnecessary if there were a near clip plane, and might not even be needed in Listing 1, because of the slight nudge inward that we give the frustum planes, as described in the March column. However, my experience has consistently been that relying on worldspace or viewspace clipping to produce valid screen coordinates 100 percent of the time leads to sporadic and hard-to-debug errors.

There is no separate clear of the background in Listing 1. Instead, a special background surface at an effectively infinite distance is added, so whenever no polygons are active the background color is drawn. If desired, it’s a simple matter to flag the background surface and draw the background specially. For example, the background could be drawn as a starfield or a cloudy sky.

The edge-processing code in Listing 1 is fully capable of handling concave polygons as easily as convex polygons, and can handle an arbitrary number of vertices per polygon, as well. One change is needed for the latter case: storage for the maximum number of vertices per polygon must be allocated in the polygon structures. In a fully polished implementation, vertices would be linked together or pointed to, and would be allocated dynamically from a vertex pool, so each polygon wouldn't have to contain enough space for the maximum possible number of vertices.

Each surface has a field named state, which is incremented when a leading edge for that surface is encountered, and decremented when a trailing edge is reached. A surface is activated by a leading edge only if state increments to 1, and is deactivated by a trailing edge only if state decrements to 0. This is another guard against arithmetic problems, in this case quantization during the conversion of vertex coordinates from floating point to fixed point. Due to this conversion, it is possible, although rare, for a polygon that is viewed nearly edge-on to have a trailing edge that occurs slightly before the corresponding leading edge, and the span-generation code will behave badly if it tries to emit a span for a surface that hasn’t started yet. It would help performance if this sort of fix-up could be eliminated by careful arithmetic, but I haven’t yet found a way to do so for 1/z-sorted spans.
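In outline, the state counter acts like this (field and function names assumed for illustration):

    /* Sketch of the state-count guard described above. */
    void SurfaceLeadingEdge(surf_t *s, int x, int y)
    {
        if (++s->state == 1)
            ActivateSurface(s, x, y);    /* only the first leading edge activates */
    }

    void SurfaceTrailingEdge(surf_t *s, int x)
    {
        if (--s->state == 0)
            DeactivateSurface(s, x);     /* only the last trailing edge deactivates */
    }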

Lastly, as discussed last time, Listing 1 uses the gradients for 1/z with respect to changes in screen x and y to calculate 1/z for active surfaces each time a leading edge needs to be sorted into the surface stack. The natural origin for gradient calculations is the center of the screen, which is (x,y) coordinate (0,0) in viewspace. However, when the gradients are calculated in AddPolygonEdges(), the origin value is calculated at the upper left corner of the screen. This is done so that screen x and y coordinates can be used directly to calculate 1/z, with no need to adjust the coordinates to be relative to the center of the screen. Also, the screen gradients grow more extreme as a polygon is viewed closer to edge-on. In order to keep the gradient calculations from becoming meaningless or generating errors, a small epsilon is applied to backface culling, so that polygons that are very nearly edge-on are culled. This calculation would be more accurate if it were based directly on the viewing angle, rather than on the dot product of a viewing ray to the polygon with the polygon normal, but that would require a square root, and in my experience the epsilon used in Listing 1 works fine.
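A sketch of that epsilon-adjusted cull is shown below; the epsilon value, vector helpers, and structure fields are assumptions for illustration rather than the exact code in Listing 1.

    /* Sketch of backface culling with an epsilon, so that polygons facing
       away and polygons so nearly edge-on that their 1/z gradients would
       be numerically unreliable are both rejected. */
    #define BACKFACE_EPSILON  0.01

    int PolygonIsBackfacing(const polygon_t *p, vec3_t viewpoint)
    {
        vec3_t topoly;
        double d;

        VectorSubtract(p->verts[0], viewpoint, topoly);   /* ray from viewpoint to polygon */
        d = DotProduct(topoly, p->normal);

        return d > -BACKFACE_EPSILON;   /* facing away, or too close to edge-on */
    }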

Bretton Wade’s BSP Web page has moved

A while back, I mentioned that Bretton Wade was constructing a promising Web site on BSPs. He has moved that site, which has grown to contain a lot of useful information, to http://www.qualia.com/bspfaq/; alternatively, mail bspfaq@qualia.com with a subject line of “help”

Source blog > Hwani's blog
Original: http://blog.naver.com/hslson/20248478

RenderWare has a very long history. It has existed since the 1980s, has been adopted by a huge number of game titles over the years, and used to be called an industry standard; recently, however, its developer Criterion was absorbed by EA, and a powerful competitor, the Unreal Engine, has been expanding its performance, reputation, and actual market share on the strength of its technology, so RenderWare's standing has shrunk considerably. RenderWare is a set of technical libraries intended to help with each part of game development. There are a graphics engine, a sound engine, an AI engine, and a physics engine; each engine can be licensed separately. They are libraries for adding the technical parts during game development, and no editor is provided.

1. RenderWare Graphics

RenderWare Graphics provides graphics-technology libraries and toolkits usable on console game machines and the PC.

2. RenderWare Audio

RenderWare Audio provides libraries and toolkits to help with multichannel audio support and production.

3. RenderWare A.I.

RenderWare A.I. provides an artificial intelligence system that developers can configure as they wish. A toolkit and source code are provided.

4. RenderWare Physics

RenderWare Physics is a physics engine that, like Havok Game Dynamics, provides a variety of physics systems. Source code is provided along with the toolkit.

5. RenderWare Studio

RenderWare Studio serves as the foundation for game development: it is an engine framework for program resource management and includes a map editor. It includes RenderWare Graphics, RenderWare Audio, RenderWare A.I., and RenderWare Physics by default, and an FPS genre pack can be added when making an FPS. However, RenderWare Studio is not widely used in practice because of its limited scope and poor extensibility; RenderWare Graphics is the most heavily used component.

Source blog > Hwani's blog
Original: http://blog.naver.com/hslson/20248504

The Jupiter engine is the renamed successor of the LithTech engine. It was originally called the Direct Engine, but because that name could be confused with Microsoft's DirectX API it was renamed Lithtech (a combination of 'lith' from its developer Monolith and 'Tech' from Technology); after being used in a number of games under that name, it was later renamed again to Jupiter.
The engine is now developed under a separate company name, Touchdown Entertainment, nominally independent of Monolith, but it is not actually a different studio from Monolith; only the company name is separate.

1. Rendering
It is based on Direct3D 9 and supports vertex and pixel shaders, including bump mapping, alpha blending, and environment mapping. World rendering is implemented on top of BSP, optimized for indoor environments; there is no dedicated outdoor terrain system, but outdoor terrain can be approximated to some degree with a variant of the BSP approach used in the original Unreal. It efficiently handles the glow effect used in TRON 2.0, provides a decal system that minimizes memory use, and supports a wide range of particles (fire, water, smoke, explosions, rain, snow, and so on) as well as 3D volumetric particles.
It also supports animated texture projector shadows.
For modeling and animation it supports vertex animation and skeletal animation, plus facial animation and lip-sync animation for realistic expression.
All of the rendering code is written entirely on top of Direct3D, so only Windows and Xbox (the platforms that use Direct3D) are supported, and porting to OpenGL is practically impossible.

2. Integrated Technologies

Artificial Intelligence
A foundation for a simple AI system is in place and can be modified for use.

Game Object Manager
A system for managing all objects on a text basis is built into the engine.

Physics
Only very basic physics effects are included.

Networking
It supports networking optimized for 32-player FPS games and Unix-based servers. Because the network code is fused into the engine, modifying it for an MMORPG is extremely painful; it was adopted for a Korean MMORPG and was mercilessly panned as a result.

3. Development Tools

DEdit
The level editing tool. You can place buildings, apply lights and textures, and place objects and items.

RenderStyle Editor
A tool for creating various special effects with bump mapping and other texture mapping techniques.

ModelEdit
A tool for simple editing of character models and in-game objects. It obviously cannot compare with professional tools like Max or Maya, but simple mesh editing and animation editing are possible.

FxEd
A tool for creating the particles used in the game.

Command Editor
A tool for managing all in-game objects on a text basis.

Jupiter Extended has been released as the latest version of the Jupiter engine. It has been used in Monolith's F.E.A.R. and Condemned. It is still fundamentally the Jupiter engine, but the renderer has been completely rewritten, the Havok Game Dynamics physics system has been integrated, and the tools have been slightly improved to take advantage of the new technology.

1. Rendering

The new Jupiter Extended renderer does all of its rendering through an HLSL shader system based on Direct3D 9. Texture mapping uses multiple texture layers such as normal maps and specular maps, and lighting aims for next-generation rendering in which per-pixel lighting based on the Blinn-Phong algorithm is applied to everything, including diffuse and specular. It supports a glow effect using emissive maps, volume shadows using the stencil buffer, and soft shadows that blur the volume shadows using multisampling.
For modeling and animation, skinned animation has been added.
The Jupiter Extended renderer is also built entirely on Direct3D, so porting to OpenGL is close to impossible, though not strictly impossible.
2. Havok Physics
Havok Game Dynamics is integrated into the engine by default. All of the standard Havok Game Dynamics features and tools are included, and because the integration comes built in, no separate binding work is required.

3. Development Tools

World Edit
Despite the different name this is the same as DEdit, upgraded to use the new rendering features and the Havok physics system.

Model Edit
This too is the same as the existing tool, updated for the new rendering features and the Havok physics system.

The remaining tools are all the same as in the Jupiter engine, with minor version updates. Exporters usable as plug-ins for Max and Maya have also been added.

Source blog > Hwani's blog
Original: http://blog.naver.com/hslson/20248491

Gamebryo is an engine that was originally called NetImmerse and was later renamed GameBryo. It has been used in many games and is well known as the engine behind Dark Age of Camelot. Gamebryo has no engine framework; it is a pure graphics rendering library, and no editor is provided. It is literally nothing more than a graphics engine and contains no other functionality. To build a game with Gamebryo you have to do all of the other programming yourself; Gamebryo is simply a graphics engine that can be dropped in as a library for the graphics-technology part.

1. Engine

Gamebryo is written in C++ and provides rendering code that supports both OpenGL and Direct3D as its APIs. It can be used on any platform that uses OpenGL or Direct3D; that is, it can also be used on consoles such as the Xbox or PlayStation.

Two versions are currently available for licensing: Gamebryo 1.2 and Gamebryo 2.0. Version 1.2 implements a generation of rendering that does not use normal maps, while version 2.0 pursues the technologies needed for next-generation rendering: normal mapping, HDR rendering, full-screen pixel shader effects, and an HLSL-based shader system.
The Elder Scrolls IV: Oblivion, recently the best showcase of Gamebryo, demonstrates what this next-generation rendering technology can do.

2. Tools

Gamebryo does not provide any editor. It provides plug-in components for Max and Maya, along with scene viewers and an animation tool for Max and Maya. The animation tool cannot be used for authoring; it only provides a preview function.

Source blog > Hwani's blog
Original: http://blog.naver.com/hslson/20248493

The Unreal Engine has been famous since it first appeared for its unusual and exceptionally well-designed architecture. It is still regarded as one of the best engines available, and its prospects for future development are considered very good as well.

1. Unreal Virtual Machine

The Unreal Engine's biggest characteristic and advantage is that it creates a flexible, software-based virtual environment called the Unreal Virtual Machine, and through that environment it dynamically ties together everything inside and outside the engine.

Other engines have also adopted the concept of a virtual machine, but the Unreal Engine's notion of the Unreal Virtual Machine differs in its basic concept from what other engines call a virtual machine.

The Unreal Virtual Machine is a kind of software virtual machine: it has a complete, machine-like structure, yet that structure itself forms a flexible virtual environment. It is very useful in its own right and provides major benefits from the start, but its advantages grow as the program gets larger and larger.

The Unreal Virtual Machine at the core of the Unreal Engine is made up of the following three parts:

Core: the heart of UnrealScript; it is the basis of modularization, componentization, and the engine's own extensibility, and it links the engine and the editor completely, forming the foundation of the fully virtual environment.
Engine: this does not refer to the Unreal Engine itself. The Engine part links internal and external modules, components, and every functional element together.
Editor: this does not refer to UnrealEd. The Editor part makes every internal, external, and self-directed modification and extension possible while everything inside and outside the engine stays fully interconnected.

These three parts, Core, Engine, and Editor, are unified under UnrealScript, a purely object-oriented programming language concept, to form the Unreal Virtual Machine, and they are not bound to any particular platform, operating system, or programming language. Because it is not bound to a development language, it can be used together with any language: compiled languages such as C, C++, C#, Cobol, Fortran, Visual Basic, and Lua, interpreted languages such as Java, and even assembly or machine code can be used with it and hooked into it. The Unreal Virtual Machine is the Unreal Engine itself; all internal functions and all external modules and components are fully interconnected through the Unreal Virtual Machine. The Unreal Virtual Machine, the overall structure, and UnrealScript are all built so that they can be extended without limit rather than being restricted.
External modules are connected as UnrealScript Packages, which link the Unreal Virtual Machine with modules, programs, or any application written in an outside language.
The Unreal Virtual Machine is the core of the Unreal Engine; every other program structure can be modified at will or designed and programmed completely from scratch.

2. Fully Object-Oriented Design

The Unreal Engine operates by connecting its internally designed structure, along with external applications, modules, and components, to the Unreal Virtual Machine, and everything interoperates smoothly at run time.
The detailed components of the Unreal Engine are as follows:

DLL: the actual external module format on Windows. Components can also be used right away without any extra work.

INT: holds information as Unicode. Multiple languages of Unicode text are supported; when supporting several languages, the file extension can be assigned arbitrarily, and existing Unicode text can be modified or new Unicode text added. The Unicode languages and extensions supported by default in Unreal Engine 2.5 and above are: English International (INT), French (FRT), German (DET), Italian (ITT), Spanish (SPT), Espanola (EST), Japanese (JPT), Chinese (CHT), Korean (KOT).

INI: holds configuration information. It stores the configuration of external modules, components, and the engine's own features.

Other formats such as DAT files, various databases, or in-house databases can also be hooked in.

3. UnrealEd

UnrealEd is tightly connected to every part of the engine through the Unreal Virtual Machine. UnrealEd is by itself a very powerful and extensive integrated real-time 3D graphics design tool, but the truly important point is that this huge, complex tool is perfectly matched to the Unreal Engine: textures, lighting, level geometry, object placement, all kinds of special effects, the scripting system, and everything else designed in UnrealEd appears in the game exactly as it looks in the editor. The various tools flexibly connected inside UnrealEd allow everything the Unreal Engine can do, apart from external programming work, to be realized in real time within UnrealEd, and you can add to and modify a level while viewing it in real time. All of this interoperates fully with the engine's internal functions and external module functions through the Unreal Virtual Machine, and it is even possible to develop your own tools or applications and plug them in as extensions inside UnrealEd, which is a remarkable strength. Depending on developer preference, external applications can be made to work with UnrealEd, or be built as completely separate applications that connect directly through the Unreal Virtual Machine.
UnrealEd itself can also be ported to or rewritten in another language, and other tools and applications can be connected regardless of the language they are written in. For example, UnrealEd 1.0 was written in Visual Basic, UnrealEd 2.0 and 3.0 were written in C++, and some Unreal Engine licensees have rewritten it in C# or developed new tools and applications in C++, C#, or other languages and connected them.

The standard UnrealEd tools in Unreal Engine 2.5 are as follows:

Actor Properties: Actors can be edited with a visual, GUI-based editor, much like a Visual Basic-style tool.
Surface Properties: a tool for directly editing the property values of Brushes and Static Meshes.
Level Properties: a tool for editing property values across the entire Level.
2D Shape Editor: a tool for creating Brushes directly inside UnrealEd. A shape can be simplified as a 2D map to minimize its size, saved to a file, then loaded later and converted into 3D polygons.
UnrealScript Editor: provides editing of Actor Class source code along with the ability to compile it immediately.
Actor Browser: lets you browse the full list of the engine's Actor Classes in a GUI, and you can also load and use Actor Classes you have written yourself or obtained externally.
Group Browser: lets you manage individual Actors as groups. When building a complex level, grouping lets you classify Actors systematically and select and modify them easily according to purpose.
Music Browser: lets you load and listen to music files, build playlists, and manage a dynamic music system that controls which music plays in which part of a level, for how long, or in specific situations. Unreal Engine 2.5 supports the UMX module music formats (mod, s3m, xm, etc.) and the OGG format by default, plus real-time digital mixing over the Internet. To support MP3, WAV, other audio formats, MIDI formats, or digital data audio formats such as CD Audio and DVD Audio, a module or component for each format can simply be plugged in.
Sound Browser: lets you load and listen to sounds, create and import your own packages, and change the bit rate, frequency, and channel settings and apply them immediately. Features such as 3D position and distance, stereo, the Doppler effect, surround sound, and multichannel sound are all connected to the Unreal Audio System, which interoperates with the Unreal Virtual Machine, and the designer can easily modify or rewrite any part of it in UnrealScript.
Texture Browser: lets you load and edit texture packages or create packages yourself. Textures are organized by type; the basic types are Texture, Shader, Modifier, Combiner, and Final Blend, and if the renderer is extended with features such as bump maps, normal maps, mask maps, Phong maps, or displacement maps, the texture types in the Texture Browser can be added to or modified to match. Textures using shaders and other features can be previewed in real time directly inside the Texture Browser, and it offers many convenient features beyond simply selecting and inspecting textures.
Meshes Browser: lets you load and play back animated meshes. Specific frames can be inspected and edited.
Prefabs Browser: lets you build packages of Actors and Brushes and reuse them. The information describing how Actors, Brushes, and a few other development files are assembled can be saved to a file and easily loaded and applied in other levels. Packages made this way behave exactly like individually placed Actors, with no difference in performance or functionality; the difference is the large amount of time and effort saved compared to building everything from individual Actors.
Static Meshes Browser: lets you load and inspect static meshes, and any Brush in a fixed, non-moving part of a level can be converted into a static mesh. Static meshes are an excellent feature added when the engine was updated to 2.0; they handle mesh processing for geometry whose shape does not change. Such Brushes are stored as simplified data; at run time the information is kept in a compact form, polygons of the same kind are consolidated and optimized, and both the memory footprint and the per-frame rendering work are minimized, while all of the mesh's information, including its actual polygon shape, position, and collision data, is retained, so functionally it is no different from an ordinary Brush. Since it is purely an optimization that reduces the processing of large numbers of polygons and saves memory, it makes little difference in low-polygon levels, but in detailed levels with many polygons, or levels with complex structures requiring heavy polygon processing, the difference is striking. Even on today's hardware it is important to minimize polygon processing and spend the hardware's power on texture technology, shader technology, and other special effects, so the value of this feature will only grow. In animation, too, if the mesh's shape does not change, multiple static meshes can be linked together and used as an animation.
Animation Browser: lets you load and play back animations, displays the various animation properties such as vertex animation, skeletal animation, and skinned animation, and lets you load, modify, or author animation data, keyframe data, skeleton data, motion capture data, skinning data, and more.
Terrain Editing: generates terrain from a heightmap using displacement mapping, and provides a variety of features for modifying the terrain and performing all kinds of terrain-related editing.
Matinee: provides a variety of useful features for conveniently creating cutscenes. With cameras, object movement, scene graphs, and other features, cutscenes can be produced much as one would direct a film.
Particles: provides features for conveniently building particle effects, either as 2D particles or as volumetric 3D particles, for fire, water, waterfalls, dust, smoke, rain, snow, and so on.

In addition, other external tools or tools you build yourself can be linked in, integrated into UnrealEd, or connected only through the Unreal Virtual Machine, independent of UnrealEd.
From Unreal Engine 3.0 on, the engine is said to be even more strongly modularized so that this kind of work is even easier.
Unreal Engine 3.0 also adds a visual programming system that lets you program almost without game programming, as if drawing a flowchart, a visual shader system and other visual-oriented improvements, other updates, and stronger support for editing a level while playing it in real time.

4. Component Modules

The standard platform and operating-system modules of Unreal Engine 2.5 are as follows:

WinDrv: driver that connects the Windows modules to the engine
LinuxDrv: driver that connects the Linux modules to the engine
MacDrv: driver that connects the Macintosh modules to the engine
XboxDrv: driver that connects the Xbox modules to the engine
Ps2Drv: driver that connects the PlayStation2 modules to the engine
GcDrv: driver that connects the GameCube modules to the engine

Window: module that lets all other engine modules run in the Windows environment through the Unreal Virtual Machine
Linux: module that lets all other engine modules run in the Linux environment through the Unreal Virtual Machine
Mac: module that lets all other engine modules run in the Macintosh environment through the Unreal Virtual Machine
Xbox: module that lets all other engine modules run in the Xbox environment through the Unreal Virtual Machine
Ps2: module that lets all other engine modules run in the PlayStation2 environment through the Unreal Virtual Machine
Gc: module that lets all other engine modules run in the GameCube environment through the Unreal Virtual Machine

Up to Unreal Engine 2.0 only 32-bit was supported, but from Unreal Engine 2.5 on, 64-bit Windows and 64-bit Linux are supported as well.
Modules can also be modified, or new ones developed and hooked in, for other operating systems or platforms. Some licensees are using the Unreal Engine on next-generation consoles and on mobile game machines such as the PSP.

Rendering: the standard rendering driver components of Unreal Engine 2.5 are as follows:

D3DDrv: component driver that connects Direct3D functionality to the engine
OpenGLDrv: component driver that connects OpenGL functionality to the engine
PixoMaticDrv: component driver that connects the PixoMatic software renderer to the engine

The rendering drivers are connected to the Unreal Virtual Machine and can be controlled through UnrealScript, so a developer can write a custom driver to support special hardware and hook it into the engine.
The standard Direct3D driver in Unreal Engine 2.5 is a Direct3D 8-based renderer, and the OpenGL driver supports only technology equivalent to Direct3D 8, but because everything is so well modularized it is easy to upgrade to a Direct3D 9-based Direct3D or OpenGL driver or to add features with new characteristics. The Unreal Developer Network provides the groundwork for easily adding next-generation graphics techniques such as normal mapping and per-pixel lighting.
The rendering technologies provided by Unreal Engine 2.5 out of the box, without customization through UDN or other additional improvements, are as follows:

World Rendering
It supports BSP, the most basic way of building indoor areas, and supports the static meshes described in the editor section above for stationary objects. For moving objects it supports vertex-based vertex animation and bone-based skeletal animation, plus skinned animation to blend smoothly between the two. For terrain, it supports height fields generated from heightmaps and terrain that can be edited in real time in UnrealEd.
It also supports an advanced portal rendering system: fully or partially reflective, mirror-like surfaces, warp-portal effects that show another area inside a space in defiance of ordinary geometry, and a sky and background system that can be translated and rotated independently.
It provides a rendering system that achieves high performance by processing only the surfaces that are actually visible and excluding hidden ones, connects indoor and outdoor environments seamlessly with no visible boundary between the two rendering approaches, and allows all of this to be edited in real time inside UnrealEd.

Character Animation
It supports up to four bone influences per vertex with no hard-coded limit on polygon count, skeletal hierarchy animation, complex animation within the game environment, and smooth blending at the joints between character animations. It also allows multi-channel animation, elements that can deform along with the animation, animation that responds to player control input while multiple animations play at once, skeletal keyframe interpolation for fully frame-rate-independent motion, techniques that keep the continuous animation of joints flowing smoothly, and a system that lets artists work in real time while monitoring memory usage and compression ratios.

Lighting
It supports dynamic lights generated as vertex lighting on all geometry, as well as projective texturing. Static or dynamic lighting can be applied to every object and surface using whichever method is most efficient, or both where needed. High-quality lighting is provided through precomputed RGB lightmaps and an optional vertex lighting channel, with directional, point, spot, and radiosity-style lights. It allows complex, situation-dependent dynamic lights and projected shadow textures on every surface, lights for player shadows and flashlights, and many effects such as flickering and swaying dynamic lights.

Special Effects
Particle effects include ready-made code for fire, water droplets, waterfalls, dust, smoke, and the like, as 2D particles or volumetric 3D particles, plus particle code for weather effects such as rain, snow, and hail. All particles can be edited in real time in UnrealEd.
For fog it supports vertex fog, which can be done in software, distance fog for high-quality hardware fog, volumetric fog for dramatic environments, and fluid surfaces for water effects.

Terrain
It provides a fast heightmap-based terrain system, complex multi-layer terrain rendering with seamless alpha-blended texture layers, decoration layers for rendering detailed vegetation, and decoration mesh facilities for other outdoor scenery.
It also allows fine-grained, real-time control of terrain inside UnrealEd: terrain can be generated from a heightmap or raised, lowered, flattened, smoothed, or roughened with the tools; individual cells can be deleted or created; and terrain can be joined smoothly to indoor geometry such as caves, buildings, or ordinary indoor maps, all edited directly in real time.

Textures
It features a material system with multiple artist-controllable texture layers, alpha blending and arbitrary blending, easy-to-use scrolling, rotating, and scaled texturing, and the ability to build precisely animated effects using static or dynamic methods.
The scale of detail textures seen up close, and of all textures, can also be adjusted, and texture animation can be matched to continuously varying rates of change.

Recording
It supports standard demo recording, producing small demo files that capture only the engine's event sequence, as well as recording to DivX movies, and it supports the ROQ video technology licensed from id Software. For in-game video playback it can play a wide range of formats including AVI, ROQ, BIK, MOV, and WMV, and any other video format can be used by adding a codec.

All rendering code is thoroughly modularized and connected to the Unreal Virtual Machine, so when you want to add new technology or change some aspect of the rendering, only that part needs to be changed. For example, if you wanted to replace the texture layer system with something completely new, a poorly modularized engine would require touching and re-wiring rendering code almost everywhere; in a thoroughly modularized engine you only need to rewrite the texture layers and fix the functions that changed, and in the Unreal Engine, thanks to thorough modularization plus the Unreal Virtual Machine connection, rewriting the texture layer is all it takes.
The Unreal Engine does not include every rendering technique at once; each game picks and builds only the parts it needs. This is also a matter of optimization: it avoids wasting computation on techniques a game does not use, and at the same time it keeps the many games built on the Unreal Engine from all looking alike. If you want features beyond the basic rendering technology, UDN provides additional support for techniques such as normal mapping, per-pixel lighting, and per-pixel shading; techniques not available for Unreal Engine 2.5 on UDN, such as virtual displacement mapping or HDR rendering, are not impossible but must be implemented yourself. Thanks to the engine's thoroughly modular structure, its extensibility, and its connection to the Unreal Virtual Machine, adding such techniques is comparatively easy.
From Unreal Engine 3.0 on, the engine has been improved so that even if unused technologies are included, no unnecessary computation is performed; support for giving each game its own distinctive technology has been further differentiated so that games do not look alike, and even the same technique can be expressed in several different ways.

Physics: Unreal Engine 2.5 bundles MathEngine's Karma 1.3 physics system at no extra charge. Karma 1.3 provides ragdoll skeletal character animation, vehicle physics, and interactive physics effects for a wide range of in-game objects. It includes functionality optimized for efficiently transmitting vehicle physics data during Internet multiplayer, physically driven animation that responds to network updates, and controls that let designers decide how much collision and physical motion to use while weighing accuracy against performance.

Audio: the standard audio component of Unreal Engine 2.5 is the industry-standard OpenAL (ALAudio), but it can easily be replaced with a different audio component. Voice-over-IP features provide voice communication over the Internet or a LAN, text-to-speech conversion, and the ability to give voice commands to the A.I.

Networking: the standard network module of Unreal Engine 2.5 (IPDrv) supports everything from very small LAN games to large-scale, server-based Internet games. Servers on the Internet running the same network engine can easily be found and joined, and new content on a server or client is exchanged automatically and applied directly in the game. The client-side scripting is written in a Java-style language, and games remain comfortably playable even on a 28.8K modem.

5. Programming

Programming with the Unreal Engine falls broadly into external languages and UnrealScript. All programming is modularized around an object-oriented design and connects to the Unreal Virtual Machine so that the program operates as a whole. Even code hooked in from an external language interoperates smoothly. You can program using only an external language, mix external languages with UnrealScript, or write everything in UnrealScript. For game programming, UnrealScript is usually the better choice: you can check your progress quickly, modification and recompilation are easy, it uses few system resources, and it is fast. UnrealScript has modular code for every part of the engine; for the game programming side in particular it comes with base module designs for Player, Monster, Inventory, Triggers, A.I., and so on, and it supports states and state-scoped functions, time-based latent functions, and networking. High-level A.I., pathfinding, and the Navigation System can all be programmed in this scripting language. These parts can be modified at will, or rewritten completely, to suit the character of the game being built. These scripts can be compiled instantly inside UnrealEd and run directly within UnrealEd, without launching a separate game executable, so you can write and modify them in real time while watching the results.

As explained at length above, the Unreal Engine's biggest characteristic and advantage is the fluid, engine-wide environment provided through the Unreal Virtual Machine: extensions, improvements, modifications, and reworkings of the engine happen dynamically, and new features can be hooked into the engine easily. This advantage matters more as the program using the engine grows larger and more complex, and for MMO-style projects where maintenance is critical; and when a company takes multiple licenses for several games, even of entirely different genres and styles, it makes it far easier than with typical engines to run a core engine team or to share useful engine improvements made for one game with the others. As game development costs keep rising this will only become more important, and it also works in your favor when shipping one game on multiple platforms, so this advantage is expected to stand out even more. From Unreal Engine 3.0 on, the structural side is said to have been strengthened further.


Source blog > Hwani's blog
Original: http://blog.naver.com/hslson/20215663




This is information I learned a few days ago through contact with id Software. I'm posting it in the hope that it may be of some use to anyone licensing an engine.

Quake Engine
GNU General Public License

Quake 2 Engine
GNU General Public License

Quake 3 Arena Engine
GNU General Public License

The source code for these three engines has already been released, and they are protected under the GPL. If you use the Quake 1 or Quake 2 engine for a commercial product, you must pay id Software $10,000 per title and per platform; once you pay, the SDK tools and the full source are provided. To use the Quake 3 engine commercially you must pay $50,000, which gets you the SDK tools plus the full source of Return to Castle Wolfenstein and Enemy Territory (including all netcode) and the scripting system.

Doom 3 Engine
The price varies with the terms of the contract, but it is far more expensive than most game engines. Once a license agreement is concluded, the full source code of Doom 3 and Quake 4 is provided along with the related SDK tools and source, and the source of all earlier id engines is thrown in as a bonus.

MegaTexture
If you have already licensed the Doom 3 Engine, the MegaTexture technology can be added to the contract at a low price. Licensing MegaTexture from the outset costs the same as Doom 3 Engine + MegaTexture and includes the full source of Doom 3 and Quake 4 as well as all of the earlier Quake engines. The technology is based on the source of Enemy Territory: Quake Wars, a game being made by one of id's third-party studios; the overall framework is unchanged from the Doom 3 Engine, but the renderer has been rewritten for current hardware, the physics system has been rewritten, and terrain-related parts have been added to the SDK. Also, unlike the earlier engines, it includes a Direct3D-based renderer in addition to OpenGL.

id Extend
This technology builds on the Doom 3 Engine-based MegaTexture technology; if MegaTexture is already part of the contract, it can be added for a small additional fee. It is said to be the technology that will be used in the third-party New Wolfenstein and in id's unannounced next title. It differs from MegaTexture in that it includes three new technologies.
Those three are the latest graphics algorithms: High Dynamic Range rendering, Displacement Mapping, and high-quality, bleeding-edge soft shadows. Licensing this technology also includes all of the earlier engines.
















Software Rendering School, Part IV: Shading and Filling
 
 


15/06/2004






Legality Info


The tutorials in this series as well as the source that comes with them are intellectual property of their producers (see the Authors section above to find out who they are). You may not claim that they are your property, nor copy, redistribute, sell, or do anything of that nature with the tutorials without permission from the authors. All other articles, documents, materials, etc. are property of their respective owners. Please read the legality info shipping with them before attempting to do anything with them!

Shading and Filling


Haya! Did we miss you? Oh we know we did! Welcome to another exciting article about software rasterization. You’re about to make one more step towards the knowledge of software rasterizing.

Let’s quickly review what we are going to discuss this time:

  • Depth sorting

  • Gouraud shading

  • Phong shading

  • Fill convention


The thing I wanted to talk about first is depth sorting. As you will notice in the source to the previous article, I order the polygons such that the polygons that are farthest away are drawn first and the polygons that are closer to the viewpoint are drawn over them. But why is that? Well, think about how a painter draws a picture. He first draws the background and then the objects over it. It wouldn't be very smart to draw the objects first, would it?

We draw our geometry in the same manner as the painters: polygons far away must be obscured by polygons closer to us, and so must be drawn before them. That's why this depth sorting algorithm is called the "painter's algorithm". How do we do it then?

Pretty simple! For each polygon in the object, sum the z components of all vertices and then divide the result by the number of vertices. For example, if you have a triangle composed of 3 points A, B and C, you'd compute the centroid as follows:

centroid = (Az + Bz + Cz) / 3

After you've found the centroids of all triangles, sort them so that the polygons with the larger centroid values are drawn first. That's pretty much all of it. Keep in mind that this simplified version of the painter's algorithm suffers from a lot of bugs and side effects (as you'll see in the demo), but for the most part it'll take care that your polygons look correctly rendered. Later in the series we'll see more advanced depth sorting algorithms (Z-buffering, BSP trees, Portals, etc.), but for now the simple painter's algorithm is enough.
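As a concrete illustration, here is a minimal C sketch of this simplified painter's algorithm; triangle_t and its fields are assumed names, and a real renderer would of course carry more per-triangle data.

    #include <stdlib.h>

    typedef struct {
        double az, bz, cz;     /* viewspace z of the three vertices */
        double centroid;       /* average z, filled in before sorting */
        /* ... other per-triangle data ... */
    } triangle_t;

    /* Bigger centroid (farther away) sorts first, so it is drawn first. */
    static int CompareCentroids(const void *pa, const void *pb)
    {
        const triangle_t *a = (const triangle_t *)pa;
        const triangle_t *b = (const triangle_t *)pb;
        return (a->centroid < b->centroid) - (a->centroid > b->centroid);
    }

    void DepthSortTriangles(triangle_t *tris, int count)
    {
        for (int i = 0; i < count; i++)
            tris[i].centroid = (tris[i].az + tris[i].bz + tris[i].cz) / 3.0;

        qsort(tris, count, sizeof(triangle_t), CompareCentroids);
    }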

Next stop: Gouraud shading. It's time you learned some more advanced shading techniques than simple flat shading, which just fills the whole triangle with a given color. Of course, flat shading is a decent technique for some situations, but combined with lighting it makes the objects look like they're made from many small faces (hmmm, they are in fact :) ).

However with Gouraud shading we smooth the faces and give the objects a more decent look. So what is it and how do we do it? Both questions are easy to answer. As already stated for flat shading, we simply define a color with which we fill the whole polygon. However, when doing Gouraud shading we define a color for each vertex of the polygon. In the rasterization process we interpolate these colors over the edges and then over the scanlines. May sound a little weird so let’s see some schemes, shall we?


Here we see a triangle defined by the points A, B and C. Each vertex also has a color defined: Acol, Bcol and Ccol for A, B and C, respectively. What we do is interpolate the colors over the edges just as the x coordinates are interpolated when we define the start and end points of each scanline; the only difference is that we define start and end colors. So, as we trace x along the edge A -> B in order to find the start coordinate of the scanline, we trace the color in pretty much the same way. We trace the color along the other edge A -> C too. So for every scanline we should have a starting and ending color value just as we have start and end x coordinates. Let's see the process in a little bit more detail:


In the beginning of the rasterization setup, we calculate the initial values of x and the slopes for the edges. Then we start sliding x along the edges to find the start and end coordinates of the current scanline. We do exactly the same thing with the colors (see figure to the right):

At the place where we set the initial x and calculate the slopes we will set the initial color value and will calculate the color slopes, which we will add to the color value as we move along the edges. The initial color value is Acol for the triangle above and the slope is calculated just as the x slope. For example the color slope for the A -> B edge will be:
              Bcol - Acol
color_slope = -----------
                By - Ay

As we already know, this simply gives the amount of change in color as we increment y. Not that weird, eh? There is only one more thing to consider before we're done with Gouraud shading. If we step the color values along the edges just as we step the x coordinates, for each scanline we will have a start and end color value. See this picture below:


So, in the funny-lookin' picture above we see a scanline in the triangle with start and end points L and M. Along with the start and end x coordinates we now have start and end color values, which we have to interpolate along the scanline just as we did along the edges. Just as before, we set the initial color value to Lcol (for the scanline above) and calculate the slope. The slope this time is equal to:
                        Mcol - Lcol
color_slope_this_time = -----------
                          Mx - Lx

which is simply the change in color as we increment x. For the color of each pixel of the scanline, use the color value that you're interpolating. And that's basically it! We've just done Gouraud shading! You'll see the huge difference right away! Of course I've demonstrated this with only one color value, but if you want an RGB color mode you simply do the same thing for each separate color channel and write the RGB pixels when interpolating the colors along the scanlines. The demo I've written runs in 32 bpp mode, so you'll see the stuff with multiple color channels.
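If you prefer code to pictures, here is a stripped-down C sketch of the interpolation just described for a single color channel, assuming a flat-bottomed triangle whose left edge is A -> B and right edge is A -> C; PutPixel() and the variable names are placeholders, and the fill convention discussed later is ignored here for clarity.

    extern void PutPixel(int x, int y, int color);   /* assumed to exist elsewhere */

    /* Fill one scanline, interpolating a single color channel from cl at xl
       to cr at xr. */
    void GouraudSpan(int y, double xl, double xr, double cl, double cr)
    {
        double cslope = (xr > xl) ? (cr - cl) / (xr - xl) : 0.0;
        double c = cl;

        for (int x = (int)xl; x < (int)xr; x++) {
            PutPixel(x, y, (int)c);      /* write the interpolated color */
            c += cslope;
        }
    }

    /* Walk the two edges down from vertex A, stepping x and the color along
       each edge and emitting one scanline per y, as described in the text. */
    void GouraudEdges(double ay, double by,
                      double ax, double acol,
                      double xslopeAB, double cslopeAB,
                      double xslopeAC, double cslopeAC)
    {
        double xl = ax, xr = ax;         /* both edges start at vertex A */
        double cl = acol, cr = acol;

        for (int y = (int)ay; y < (int)by; y++) {
            GouraudSpan(y, xl, xr, cl, cr);
            xl += xslopeAB;  cl += cslopeAB;   /* step the left edge  A -> B */
            xr += xslopeAC;  cr += cslopeAC;   /* step the right edge A -> C */
        }
    }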

The problem with Gouraud shading, as with everything else, is speed, and in this case the problem is big. You might not feel a big speed hit when interpolating one color channel, but with 3 or more it runs pretty slowly, and even with low-level ASM optimizations it's still not speedy enough. Also, when dynamic lighting gets involved, everything becomes ultra-slow (Gouraud shading is mainly used with per-vertex dynamic lighting). So, in the future we will not use Gouraud shading much (or maybe not at all), but texture mapping and static lighting instead.

There is just one issue concerning Gouraud shading that we have not yet discussed: perspective correction. It's very hard (nearly impossible) to see, but the colors in the polygons in the demo "swim" around. The reason is that we're doing a 3D interpolation in 2D space; with Gouraud shading, however, the artifacts are barely visible. The fix for this will be covered later on when we get to texture mapping.

Ok, we've covered everything there is to know about Gouraud shading so let’s move on to the other topics.

Next is Phong shading. To explain this type of shading I'll first have to explain a little bit about lighting. Just keep in mind that I'll explain lighting in MUCH more detail in the next article. So, when we light the polygons in our world we perform some calculations with the main help of a surface normal and a bunch of other stuff. When working with flat shading we simply use the normal of the polygon and then use the color we've found with the lighting calculations to draw the whole polygon.

When using Gouraud shading we perform almost the same trick. But instead of using the polygon's normal, we calculate the normal for each vertex of the surface, then we perform lighting calculations for each vertex, and finally we interpolate the colors we've found along the polygon using Gouraud shading. This will smooth out edges and corners in objects.

The Phong shading takes this one step further. The main algorithm is this:

  • we calculate vertex normals

  • we interpolate the vertex normals along the polygon edges (just as we would interpolate colors for Gouraud shading)

  • we interpolate the start and end vertex normal along each scanline, so in fact we find the normal for each pixel!

  • finally we perform lighting calculations on all pixels with the help of the pixel normals and set the pixel's color to the resulting color.


It might sound somewhat simple to implement, but there are many, many things that are tricky with Phong shading. The first thing is of course speed. There ain't a processor and/or 3D accelerator on the planet fast enough to do Phong shading in real time. It's perfectly fine to do Phong shading for off-screen rendering, although a raytracer would be more suitable for that purpose (but that's another story!).
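Just so the idea is concrete, here is a rough sketch of the per-pixel work one scanline of Phong shading involves, with a single directional light and diffuse-only lighting; the vector type, the helpers, and PutPixel() are assumptions for illustration, not a definitive implementation.

    #include <math.h>

    typedef struct { double x, y, z; } vec3;

    extern void PutPixel(int x, int y, int color);     /* assumed to exist elsewhere */

    static double Dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    static vec3 Normalize(vec3 v)
    {
        double len = sqrt(Dot(v, v));
        vec3 r = { v.x / len, v.y / len, v.z / len };
        return r;
    }

    /* Phong-shade one scanline: nl and nr are the interpolated normals at the
       span's left and right ends, lightdir points toward the light. */
    void PhongSpan(int y, int xl, int xr, vec3 nl, vec3 nr,
                   vec3 lightdir, int basecolor)
    {
        for (int x = xl; x < xr; x++) {
            double t = (xr != xl) ? (double)(x - xl) / (double)(xr - xl) : 0.0;

            /* interpolate the normal across the span and renormalize it */
            vec3 n = { nl.x + t * (nr.x - nl.x),
                       nl.y + t * (nr.y - nl.y),
                       nl.z + t * (nr.z - nl.z) };
            n = Normalize(n);

            /* evaluate the lighting equation at this pixel */
            double diffuse = Dot(n, lightdir);
            if (diffuse < 0.0)
                diffuse = 0.0;

            PutPixel(x, y, (int)(basecolor * diffuse));
        }
    }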

"So did ya fill our heads with all that crap that we will never need?" Well, yes, I agree completely, but Phong shading should definitely be known. Believe me, it's worth it ;) Ok, let's move on to the next and most important topic: fill convention.

Why is the fill convention so important? Well, it solves all kinds of color/texture bugs that will occur in your rasterizer if you simply snap the coordinates to integer positions. The fill convention also improves the visual quality of the polygons. It removes all kinds of jittering, texture/color swim, and so on; nasty things you will have to face without a fill convention.

So let's see what the fill convention is all about. It can be defined as follows: by applying a fill convention you make sure that only pixels whose reference point (some fixed point inside the pixel) lies inside the polygon are drawn. Let's see this here:


Here we see a triangle rasterized on a pixel grid. Now let's analyze what's wrong with the picture above. First, note that when we convert the y coordinate of the topmost point to integer coordinates, the new position may be located above the original point itself. This is very, very bad. Let's imagine for a second that we want to use Gouraud shading for the triangle in the scheme above. We set up slopes, initial values and everything as normal. But! When we convert the y position of the topmost point to integer coordinates the new point will be above the original point, and therefore the edges of the triangle become longer. Now, when we interpolate the colors along the edges, at some point they will pass the end values and go out of range. For colors the problem might not be so big, but for a texture mapper this will almost always crash the program.

So we need to avoid this situation. The solution to this problem is pretty simple: we will only light pixels whose top-left corners are inside the polygon. Let's see this in practice:


There! You don't need to worry that some of the pixels are not lit, because they will be rendered by the neighbouring triangles. However, there is one last little thing to consider. Say we have an edge that lands exactly on the top-left corner of some pixels. The problem here is that the neighbouring triangles as well as the current one are going to light those pixels! This should never happen in a good rasterizer. Check the picture below:


These are two neighbouring triangles. The pixels lit by the first one are in blue, by the second in yellow, and by both in light blue. To avoid that problem we further define our fill convention to be a "top-left" fill convention, which states the following: if the edge in question is a top or left edge of the polygon and it falls exactly on the reference point, then the pixel is included in the polygon. Conversely, the pixel is not included if the edge is a bottom or right edge. Don't worry about it too much though, since, as already stated, the skipped pixels will be lit by the neighbouring polys.

The nice thing is that the big 3D hardware APIs, OpenGL and Direct3D, also use the top-left fill convention (as does every other renderer that applies a fill convention, as far as I know), so your polys will have the same quality as ones rendered with OpenGL and D3D :D.

The big issue that's left is the implementation. Hopefully this is the easiest part of all! For the top-left fill convention, where the reference point is the top-left corner of the pixel, instead of simply snapping the starting and ending y positions of the triangle edges to integers, we calculate them as follows:
    startY = ceil( top_most_point_of_triangle_y );
    endY   = ceil( bottom_most_point_of_triangle_y ) - 1;

I think the ceil and floor math functions hardly need explaining, but here is a short description:

ceil(x) - calculates the smallest integer bigger than or equal to x
floor(x) - calculates the largest integer smaller than or equal to x

Now to avoid graphical artifacts we also have to update the interpolants too, since the new start position is a little bit below the initial one (or it remains unchanged if it’s already an integer). So we do this:
    x1 += slope1  * (startY - top_most_point_of_triangle_y);
    x2 += slope2  * (startY - top_most_point_of_triangle_y);
    c1 += cSlope1 * (startY - top_most_point_of_triangle_y);
    ...

where x1, x2, c1, ... are the values we’re interpolating along the triangle’s edges and slope1, ... are their respective slopes.

The exact same thing must be done when rasterizing a scanline:
    startX = ceil( x1 );
    endX   = ceil( x2 ) - 1;
    col   += cSlope1 * (startX - x1);

where x1, x2 are the start/end coords of the scanline and col is the color that we’re interpolating along it.
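To make this concrete, here is a minimal sketch in C of the edge setup described above. The names (setup_edge, xSlope, cSlope and so on) are made up for this sketch, not taken from the article’s sources; it simply shows the ceil() snap plus the interpolant pre-step for one edge:

    #include <math.h>

    /* Snap an edge to integer scanlines (top-left convention) and pre-step
       its interpolants by the amount we skipped at the top. */
    void setup_edge(float topY, float bottomY,
                    float x, float xSlope,
                    float c, float cSlope,
                    int *startY, int *endY,
                    float *xStart, float *cStart)
    {
        *startY = (int)ceilf(topY);
        *endY   = (int)ceilf(bottomY) - 1;   /* last scanline owned by this edge */

        float preStep = *startY - topY;      /* how far we stepped below the real top */
        *xStart = x + xSlope * preStep;
        *cStart = c + cSlope * preStep;
    }

The same ceil()-then-pre-step pattern is then applied per scanline for startX, endX and the color, exactly as in the snippet above.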

I think we’re done! It’s important that you learn the fill convention and start doing it right from the beginning. If you do not understand something described in this article, please revisit some other documents about it or ask in the forums.

Also look at the demo and the source coming with this article. You’ll see a huge difference in the rendering quality compared to the demo for the previous tutorial.

Well, I guess that’s all this time, folks! The tutorial is pretty big this time, but why the hell should I care?! The DevMaster guys never set me a space limit! Time to say goodbye, folks! ‘Till the next article I advise you to try what you’ve learnt, with some pesky DOS compiler under mode 13h or under SDL with the cool display system that I’ve written, which you can find in the sources.

Next time we will discuss lighting in detail and maybe one or two other small things too. After that we’re getting right into the interesting stuff like clipping (sorry, no homogeneous stuff) and texture mapping! That was all! See you later, bye!

Download source code for this article

Original site: http://www.devmaster.net/articles/software-rendering/part4.php
















Software Rendering School, Part III: Triangle Rasterization
 
 


26/03/2004






Legality Info


The tutorials in this series as well as the source that comes with them are intellectual property of their producers (see the Authors section above to find out who they are). You may not claim that they are your property, nor copy, redistribute, sell, or anything of the nature, the tutorials without permission from the authors. All other articles, documents, materials, etc. are property of their respective owners. Please read the legality info shipping with them before attempting to do anything with them!

Triangle Rasterization


In this article we will discuss one of the most important basics of computer 3D graphics: triangle (polygon) rasterization.

Perhaps there is no need to explain why this is important, but for those of you who aren't quite certain yet, we’ll explain. All 3D objects that we see on the computer screen are actually made of tiny little geometrical objects often called primitives. Quadrilaterals, triangles, n-gons, etc. are examples of primitives. We will concentrate on triangles, mostly for one main reason: every object can be split into triangles, but a triangle cannot be split into anything other than triangles. Because of this, drawing triangles is a lot simpler than drawing higher-order polygons; there are fewer things to deal with. That is why triangles are so commonly used in computer graphics.

By knowing how to properly draw triangles, one has the ultimate power to deal with all kinds of cool 3D stuff. Of course, if everything you want to do is to draw a few dumb-lookin’ billboards or sprites you can skip reading about rasterization. But you should continue reading if you’re interested in doing solid 3D objects. Now, let’s see what we need to know in order to start drawing triangles in a 3D world.

Every child after 1st grade will tell you that a triangle has 3 sides and therefore is composed of 3 points. In our case this will be vertices, which aside from other information like color or texture indices will also contain the coordinates of the triangle points.

So, how do we convert a triangle defined in 3D space to 2D (screen) space? Well, very easy! A triangle in 3D is also a triangle in 2D space, so the only thing we have to do is to project the triangle’s points from 3D to 2D space and rasterize it.

Right, that's all neat and simple, but how do we rasterize the beast on the computer screen?! Now that's a hard one! Triangle rasterization competes for being the slowest and most sophisticated process in a 3D engine. Imagine a 3D scene with hundreds or thousands of triangles, all of which have to be rendered with all the math that goes with them. Also remember that the triangle rasterization described in this paper is only the tip of the iceberg.

Back on topic. After you’ve projected the points of your triangle, it is time to rasterize it. Rasterizing means drawing it on the computer screen, which is a raster grid (discrete pixels). The best method known so far for polygon rasterization is the method of horizontal spans. A span, in our case, is simply a synonym for a horizontal line. Let’s have a look at this image:


As you can see, a triangle can be viewed as a collection of horizontal lines starting from the top of the triangle and going down to the bottom. When rasterizing a polygon, we find some information about the spans that build it (start/end points, etc.) and then we simply draw the spans on the screen (we believe drawing horizontal lines won’t be much of a problem…). The process of converting a polygon to a set of horizontal lines is called "scan-conversion", and we'll describe it in detail with all the math involved. Most tutorials online just show some C source for the reader to wade through :) (we learnt from those resources, though).

As mentioned above, in order to define a horizontal line, we must define a start and an end point for it. This is pretty simple business in theory: we just have to trace the edges on the left and on the right and draw spans in between. To find the span endpoints, we linearly interpolate along the triangle’s edges (or sides). This isn't the best place to explain linear interpolation, but you can think of it like this:

Linear function:
    f(x) = ax + b
    a = const
    b = const

You also know that: f(x+1) - f(x) = const

So, in the case of linear interpolation, we can see that f(x) is changing with a constant value each time we increment or decrement x. This will be quite useful for us.

Let’s see how:


It’s clear that the line above is defined by two points, A and B. We also have a point C, for which we only know the y coordinate (we simply loop from the top point to the bottom point of the line). We will have to interpolate between A and B in order to find the x coordinate that we want. First we find by how much the x coordinate progresses each time we increment y on the line defined by A and B:
    slope = (Bx - Ax) / (By - Ay)

Now to find Cx for any y coordinate, do this:
    Cx = Ax + slope * (Cy - Ay)

One way to interpret "slope" is to think of it as "how much x we go per y", which is basically what the division calculates. So, for every new scanline we go to (for every new y), we simply add the value of "slope" to x.


Given the picture, we see that if we interpolate x coordinates from V1 to V3 for every y we will get the start points of our spans. In the same way, by interpolating x from V1 to V2 and from V2 to V3 for every y, we get the end points for our spans. If you have ever come across drawing lines on the computer, you have surely seen interpolation before. This time though, there is no need to account for slopes < -1 or slopes > 1, because gaps will be filled up by the spans we will draw later on. So what we end up with for the sides of the triangle is three very simple line routines.

After you’ve scan-converted all of the triangle’s (polygon’s) edges all you have to do is to draw the spans with the color specified for the primitive. And that's it, neat and purdy.
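As a rough illustration of the whole process, here is a compact flat-colored triangle rasterizer in C. This is only a sketch, not the tutorial’s actual listing: put_pixel() is assumed to be provided by your display code, and no sub-pixel correction (see the fill convention tutorial) is applied yet.

    #include <math.h>

    extern void put_pixel(int x, int y, unsigned color);   /* provided by your display code */

    typedef struct { float x, y; } point2;

    static void swap_points(point2 *a, point2 *b) { point2 t = *a; *a = *b; *b = t; }

    void draw_triangle(point2 v1, point2 v2, point2 v3, unsigned color)
    {
        /* sort the vertices so that v1.y <= v2.y <= v3.y */
        if (v2.y < v1.y) swap_points(&v1, &v2);
        if (v3.y < v1.y) swap_points(&v1, &v3);
        if (v3.y < v2.y) swap_points(&v2, &v3);

        float dy13 = v3.y - v1.y;     /* long edge v1->v3 */
        float dy12 = v2.y - v1.y;     /* short edge v1->v2 */
        float dy23 = v3.y - v2.y;     /* short edge v2->v3 */
        if (dy13 <= 0.0f) return;     /* degenerate triangle */

        float slope13 = (v3.x - v1.x) / dy13;

        for (int y = (int)ceilf(v1.y); y < (int)ceilf(v3.y); y++)
        {
            /* x on the long edge, and x on whichever short edge this scanline crosses */
            float xa = v1.x + slope13 * (y - v1.y);
            float xb;
            if (y < v2.y && dy12 > 0.0f)
                xb = v1.x + (v2.x - v1.x) / dy12 * (y - v1.y);
            else if (dy23 > 0.0f)
                xb = v2.x + (v3.x - v2.x) / dy23 * (y - v2.y);
            else
                xb = v2.x;

            int left  = (int)ceilf(xa < xb ? xa : xb);
            int right = (int)ceilf(xa > xb ? xa : xb);
            for (int x = left; x < right; x++)    /* the horizontal span */
                put_pixel(x, y, color);
        }
    }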

We know that it is pretty complicated and messy right now, but it will all become clear with time (at least it did for us ;) ). Anyway, check out the sample source code to see how to implement the rasterizing process. It is well commented, so we guess it will be easy to grasp. If not, send us an e-mail and we promise to reply. Now that we know how to render triangles, we can start with the fun stuff :). Next time you will learn how to gain speed using fixed-point math, how and why to do depth sorting, and how to do some basic kinds of shading. Maybe something more, maybe something less ;).

Download source code for this article

Original site: http://www.devmaster.net/articles/software-rendering/part3.php

















Software Rendering School, Part II: Projection
 
 


30/11/2003






Legality Info


The tutorials in this series as well as the source that comes with them are intellectual property of Mihail Ivanchev and Hans Törnqvist. You may not claim that they are your property, nor copy, redistribute, sell, or anything of the nature, the tutorials without permission from the authors. All other articles, documents, materials, etc. are property of their respective owners. Please read the legality info shipping with them before attempting to do anything with them!

Projection


Hi again! Back for some more software rendering huh? Very well! This time we will talk about projection and you will also see some sample source. We won’t waste time and space so let’s move on...

Projection in itself isn’t a very hard concept, which we will show eventually. But, after this tutorial is through with everything it’s supposed to cover, there will be some unanswered questions. The reason is that the solutions to the problems are tricky and should be taught when the general projection-procedure is fully understood. Don’t worry, we will cover the problems in this text and, even better, we will release a tutorial later on to solve the problems.

Now, let’s get back to the point.

Projection is the method that converts 3D coordinates to 2D screen coordinates. We will cover two methods in this tutorial, and to make things simple, we’ll start with the most basic projection method known to humankind: the parallel projection.

Parallel projection, which is also known as orthographic projection, simply converts the 3D coordinates to 2D coordinates by completely ignoring the Z-coordinate (the depth value). The result is that each vertex is mapped to the screen exactly as it appears in the 3D world, dead on. The drawback with this method is that everything looks flat and equally sized no matter how far away it is; we need to see things in perspective, as our eyes do. However, keep in mind that this projection technique is perfectly suitable for some cases, for example the viewports in a 3D modelling program. Here is a little sample image of how the parallel projection works:


The next question is of course how to use the parallel projection to project points. Well here is some code:
    x’ = x + viewportWidth/2
    y’ = -y + viewportHeight/2
    z’ = 0

where x’, y’, z’ are the coordinates of the screen point and x, y, z are the coordinates of the 3D point.

We add half of the sizes of the viewport because we would like the origin of the 3D world to be in the middle of the screen. The origin of the screen is in the upper-left corner, however. That’s why we shift the coordinates a little to center them on the screen.

It’s pretty easy to transform the point by multiplying it with a matrix, and here’s the projection matrix for the parallel projection (note the -1 that flips the Y axis, matching the formula above):

              | 1  0  0  w |
    matProj = | 0 -1  0  h |
              | 0  0  0  0 |
              | 0  0  0  1 |

where w = viewportWidth/2 and h = viewportHeight/2. Multiply a 3D point by this matrix to get the screen point.
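As a quick sketch (the helper names are made up, not the tutorial’s code), the whole parallel projection can be wrapped up like this:

    typedef struct { float x, y, z; } vec3;
    typedef struct { float x, y; } vec2;

    /* Parallel (orthographic) projection: drop Z and re-center on the viewport. */
    vec2 project_parallel(vec3 p, int viewportWidth, int viewportHeight)
    {
        vec2 s;
        s.x =  p.x + viewportWidth  * 0.5f;
        s.y = -p.y + viewportHeight * 0.5f;   /* flip Y and move the origin to the centre */
        return s;
    }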

Now let’s move on to something more useful and better looking: the perspective projection! The perspective projection is just about everything we need in order to start doing cool-looking 3D graphics. To get things going, here’s another image for you to look at:


The further away an object is, the more it shrinks on the screen. The reason is that light-rays converge into our eyes.

Objects emit or reflect light and the rays can go in almost any direction. Now, only a part of this light reaches our eyes and these rays go from all points of the objects towards a point in our eyes. Therefore, the rays converge into a point, making distant objects smaller visually than close objects, because then the rays have converged a lot when they reach our eyes.

Compare the sphere and the cube in the above image to see this effect. So, how do we perform this converging? It is a simple matter, best explained with another image:


where:

v = 3D point
v’ = projection screen point
y, z = Y- and Z-coordinates from v
y’ = Y-coordinate from v’
d = distance to projection plane
α = Field of View (FOV)

The projection plane could be thought of as the screen, where all 3D points are projected onto. We want to solve y’, the Y-coordinate for the screen. If you look closely, you can see that there are two similar triangles: eye-v-Zaxis and eye-v’-Zaxis. This gives us:
    y’/y = z’/z = d/z

The projected point lies on the projection plane, so its Z-distance z’ is simply d. Solving for y’ gives:
    y’ = d * y / z

And that’s it! Do the same for X and you got your 2D screen coordinates.

One thing remaining, what is d? For this, we need to make use of some trigonometry. First of all, we assume the projection plane is going from -1..1 in X and Y (making it 2x2):
    tan α = 1 / d

    d = 1 / tan α = cot α

And we got d. One thing to take care of is the aspect ratio of the screen. To make sure things get projected correctly onto rectangular viewports, you must scale d for the X-projection by the aspect ratio; d for the Y-projection is usually kept as it is. Remember also that our projection plane’s dimensions ranged from -1 to 1. Putting the projection and the viewport mapping together for a point (x, y, z), we get:

    d = cot α
    screenX = (d * x / z / (width / height) + 1) * viewportWidth/2
    screenY = (d * y / z + 1) * viewportHeight/2

For screenX, we scale d by the aspect ratio of the screen; adding 1 moves the projected coordinate from the range -1..1 into 0..2, and multiplying by half of the viewport width gives the final pixel coordinate. It’s pretty much the same for screenY, except there is no aspect ratio correction.

It’s quite some math, we admit that. But we’ve given you a thorough walkthrough of the concept of perspective projection, something that will come in handy later on.
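Putting the pieces together, here is a minimal C sketch of the perspective projection. It is only a sketch, not the tutorial’s code: fov is taken as the full vertical field-of-view angle in radians (so d = 1/tan(fov/2), which matches d = cot α when α is the half-angle), and the final mapping from the -1..1 plane to pixels, including the Y flip, is one common choice.

    #include <math.h>

    typedef struct { float x, y, z; } vec3;
    typedef struct { float x, y; } vec2;

    vec2 project_perspective(vec3 p, float fov, int width, int height)
    {
        float d      = 1.0f / tanf(fov * 0.5f);   /* distance to the projection plane */
        float aspect = (float)width / (float)height;

        /* project onto the plane at distance d (p.z must be well above 0, see below) */
        float px = d * p.x / (p.z * aspect);      /* aspect correction for the X axis */
        float py = d * p.y / p.z;

        /* map the -1..1 plane onto the viewport */
        vec2 s;
        s.x = (px + 1.0f) * width  * 0.5f;
        s.y = (1.0f - py) * height * 0.5f;        /* flip Y: screen Y grows downwards */
        return s;
    }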

Now that we’ve seen the good sides, let’s see the bad ones too. First of all, you MUST make sure that the Z-coordinate of the vertices is NEVER 0, because the perspective projection would cause a division by zero. There are methods to avoid that: one of them is to add some small offset to the z coordinate of the vertices (like 0.001 or 0.0001) before projecting. Although simple, this method is not very correct or robust. The best way is to set a near clipping plane just in front of the projection plane and clip the vertices against it. Clipping will be discussed in another tutorial, but we wanted to tell you that there are solutions. When we clip against some plane, we make sure that everything behind the plane is not considered for further processing.

Another related and very serious problem is that when the Z-coordinates get negative, the X- and Y-coordinates get flipped on screen. To avoid this problem one can use the same trick as for the z-equal-to-zero problem: a near clipping plane.
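As a tiny illustration (not from the original source), the simplest safeguard is to check every point against a chosen near plane before dividing by z; real clipping, covered later, would split the triangle at the plane instead of just rejecting points:

    static const float NEAR_Z = 0.1f;   /* hypothetical near plane distance */

    int in_front_of_near_plane(float z)
    {
        /* avoids the division by zero and the X/Y flip for z <= 0 */
        return z >= NEAR_Z;
    }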

Well that’s it about projection really. It’s enough to start creating some neat graphics.

This tutorial also comes with commented sample source in which you can check the stuff that you don’t understand and see how projection is implemented in practice.

Next time, we will talk about geometry in 3D and triangle rasterizing. 'Till then, cya!

Download source code for this article

Original site: http://www.devmaster.net/articles/software-rendering/part2.php
















Software Rendering School: Part I
 
 


19/10/2003






Legality Info


The tutorials in this series as well as the source that comes with them are intellectual property of Mihail Ivanchev and Hans Törnqvist. You may not claim that they are your property, nor copy, redistribute, sell, or anything of the nature, the tutorials without permission from the authors. All other articles, documents, materials, etc. are property of their respective owners. Please read the legality info shipping with them before attempting to do anything with them!

Introduction to the Series


Hello computer fans and welcome to our software rendering school, which is a series of tutorials that will show you the interesting world of software rendering. The tutorials are written by me, Mihail Ivanchev, and my pal, Hans Törnqvist, and we hope that you’ll enjoy them as much as we’ve enjoyed writing them.

What exactly is the goal of these tutorials? The goal is to teach you the basics of 3D graphics. The further you read, the more advanced the techniques will become. Each tutorial comes with full source code and executables for at least two platforms (Windows and Linux). Feedback is welcome so that we can correct any bugs, problems, etc. and, of course, so that we can make better tutorials.

Why learn software rendering when hardware accelerators are loved and used by all? First, in order to work with advanced APIs like OpenGL and Direct3D, which drive the hardware, and to get as much as possible out of them, one must know how they work and what in fact they do. Second, we are entering an age when people demand great visual quality and flexibility, which is sometimes impossible to achieve in hardware. Dual-CPU systems are starting to get cheaper and more popular, which will aid us a lot. The good thing with dual-CPU systems is that while the first processor is handling general game code (managing resources, calculating AI, generating sounds, etc.), the other can render the visuals, almost exactly like a GFX accelerator. The obvious difference is that we can control a CPU much more freely than a GPU (try writing a game on a GPU without the help of OpenGL or Direct3D ;)). Of course, not many people have dual-CPU systems, but not many people had GFX accelerators 5 years ago either.

What will the tutorials include? Well, basically everything you need to know to start programming 3D applications; mathematics (oh boy!), polygon rasterization, depth buffering, hidden surface removal, texture mapping, mip-mapping, multi-texturing, blending and so on. Sounds tasty and crunchy, right?

The tutorials include a large number of diagrams and schemes to make things easier to grasp. We will also recommend articles and other tutorials so you can study things we did not fully cover, or to check that we’re no big fat liars!

Now it’s time to move on. The first tutorial will introduce the basic mathematics of 3D.

Basic Mathematics


In this tutorial we go through a short explanation of 3D and some of the math that we will need in future tutorials. We do NOT explain all of it, since that would take way too much time and space. Some information we've left out of this tutorial may show up later on. If you’re interested in knowing more, then visit some of the links given at the end of the tutorial. Well, let’s get on with it!

The Real Basics


Let’s start with what 3D really is. This is a subject often overlooked in other tutorials; they never explain what 3D actually is. It's not as hard as some of you might think.

I hope you are familiar with 2D graphics programming and the all-favorite coordinate systems that are used there. We always have two axes, let’s call them X and Y, perpendicular to each other. Now if you want to put a point on the screen using this kind of coordinate system, you must choose a coordinate on the X axis and a coordinate on the Y axis for the point. Let’s look at a small example for all this:


The graph above shows where the point with coordinates (x;y) is on-screen using that coordinate system.

But when we move into 3D, we add a new dimension, which means another axis for the coordinate system. How do we handle this? The screen is always flat, and we use a 2D coordinate system to draw on it. Well, this is the main issue of 3D graphics: projecting 3D coordinates to a 2D coordinate system so that they can be drawn on the flat screen. How exactly we convert the coordinates from 3D to 2D we will come to later; we mention the problem here only for those with some doubt in their minds.

First, you have to learn how to represent coordinates in 3D. Let’s see what a point in a 3D coordinate system looks like:


This time, you see that the point is given with 3 coordinates for 3 axes (x; y; z). Nothing strange about 3D, that’s all the super-basics there is to it actually.

Objects and Operations in 3D Space


Let’s begin with how to represent objects. Everything in 3D is built up from points called "vertices". A triangle has three vertices, for instance. A vertex is a point plus, possibly, various additional parameters. Note that there is a significant difference between vertices and points: points are simply (x; y; z), while vertices may have more data bound to them. But the most prominent difference is that points are arbitrary positions in space, while vertices are positions bound to primitives (like triangles or entire objects). You may call us picky with details like this, but if there is a difference between two things, saying they are equal is totally wrong.

In the beginning, we do not need any data for a vertex other than its point. Here is some sample source for how a vertex can be defined:
typedef struct
{
    float x,   // Position on the X axis
          y,   // Position on the Y axis
          z;   // Position on the Z axis
} vertex;

Remember that vertices are actually nothing more than simple points for now. The operations that you can perform with points are not very complex, but they are the basis for all 3D representation: translation, scaling and rotation. To help out, translation is just a fancy word for moving. Let’s see some sample code:
    new.x = point.x + translate.x;
    new.y = point.y + translate.y;
    new.z = point.z + translate.z;

As you can see from this code, translation is really nothing more than a simple movement. You will most often use this operation to move points and objects between coordinate systems, which will become apparent later on.

Imagine you move the point in the picture above along the axes. We sure hope you understand what this is good for.

The next operation is scaling:
    new.x = point.x * scale.x;
    new.y = point.y * scale.y;
    new.z = point.z * scale.z;

From this code we see that scaling is also some kind of movement, except that we multiply the coordinates by the scale values. The point is scaled relative to the origin: if we scale by factors of 2, the new point will be twice as far away from the origin as the original one.

The third and most complex operation is the rotation. Here is some sample source:
    // X-Rotation
    new.x = v.x
    new.y = v.y * cos(a) - v.z * sin(a)
    new.z = v.y * sin(a) + v.z * cos(a)

    // Y-Rotation
    new.x = v.x * cos(a) - v.z * sin(a)
    new.y = v.y
    new.z = v.x * sin(a) + v.z * cos(a)

    // Z-Rotation
    new.x = v.x * cos(a) - v.y * sin(a)
    new.y = v.x * sin(a) + v.y * cos(a)
    new.z = v.z

There are deep explanations of why this works, but we will not get into them. Just remember that this is the most common way to rotate something. There are tons of other ways, but they are a lot more advanced, harder to use, and only work in specific cases.
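As a small worked example, here is the Y-rotation from above wrapped into a C function (a made-up helper, assuming the angle is given in radians):

    #include <math.h>

    typedef struct { float x, y, z; } point3;

    point3 rotate_y(point3 v, float a)
    {
        point3 r;
        r.x = v.x * cosf(a) - v.z * sinf(a);
        r.y = v.y;
        r.z = v.x * sinf(a) + v.z * cosf(a);
        return r;
    }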

Now it’s time for something more complex (OK, a lot more complex). It’s time to take a look at vectors. The first question, of course, is what vectors are. This is quite a complex question, because vectors can be described in so many ways. The most important aspects are position, direction and magnitude, but we will not use vectors for positions.

One way of expressing directions is to use angles. This however is very impractical in 3D, since angles induce trigonometry which is slow. We will try to keep away from trigonometry as much as possible.

Fortunately, we can express directions with a "position". Observe that vectors are neither physical things nor objects; they are merely imaginary aids. For example, we have a latitude and longitude grid over the Earth, but we do not see it. It's the same with vectors: we do not see them, but they help tremendously. Another important feature is that they are “nowhere”. Let's take the Earth as an example again; we do not know where in the Universe our planet is, as there is no definite reference origin except the Earth itself. The same goes for vectors: there is no real reference origin for them, since they describe ONLY direction and possibly speed, so we need an origin for every vector. This origin will be the numerical origin, (0, 0, 0). To define a vector we need at least two points. We have our first (the origin), so we need a second. This is how we will define vectors in most cases: with only one point, since we assume the other one to be the origin. This will make the work a lot easier, as you will see later. Now let’s look at a simple example of a vector:


As you can see the vector on the graph above is defined with exactly two points, the origin and the point V.

We understand that this is all confusing, but it’s in fact very simple. Say we want to show a direction to the right:
    (x; 0; 0)

where "x" is a positive value. If we want a vector that points straight down:
    (0; -x; 0)

where again x is a positive value.

To help even more, let's look at a car in motion relative to the ground. The position of the car is actually a position vector, describing the location of the car relative to everything else. The direction, however, is a directional vector only showing where the car is heading. The directional vector is nowhere in space, but shows numerically related to (0, 0, 0) what the direction is. The best way to learn vectors is to see them in action and how they work, so let's get into that. We will now delve into some basic mathematics with vectors, which we need in order to explain the more complex ones.

Addition of two vectors, the simplest operation, produces a new vector:
    new.x = v1.x + v2.x;
    new.y = v1.y + v2.y;
    new.z = v1.z + v2.z;

As you can see from the sample code above you only have to add the coordinates for the corresponding axes.

The subtraction of two vectors looks just like the addition:
    new.x = v1.x - v2.x;
    new.y = v1.y - v2.y;
    new.z = v1.z - v2.z;

Not hard, huh? Ok let’s move to multiplication:
    // 1. Two vectors
    new.x = v1.x * v2.x;
    new.y = v1.y * v2.y;
    new.z = v1.z * v2.z;

    // 2. One vector and a scalar
    new.x = v.x * scalar;
    new.y = v.y * scalar;
    new.z = v.z * scalar;

As you can see there are two types of multiplication: you can multiply a vector with another vector, or a vector with a scalar. The division of two vectors (or of a vector by a scalar) looks exactly the same except for the division operator, so we won’t waste space on it here. Now that we can do some basic operations with vectors, let’s examine the elements a vector has. First, as you already know, a vector represents a direction. We can find a direction like this:
    v = p2 - p1

where p1 and p2 are two points that we would like to know the direction between. For example, we could find the direction of a car by taking two points along its route. The vector v points in the direction from p1 to p2.

Now we give the formula for the length of the vector, which is also called the magnitude of the vector. The length of the vector is the distance between the numerical origin and the direction point of the vector. Here is the formula:
    len = sqrt(v.x * v.x + v.y * v.y + v.z * v.z);

This is simply Pythagoras’ theorem extended into 3D. Note that this can also be used to find the distance between two points.
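Here is a minimal C sketch of the magnitude formula, together with the point-to-point distance that follows from it (the helper names are made up for this sketch):

    #include <math.h>

    typedef struct { float x, y, z; } vec3;

    float length(vec3 v)
    {
        return sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    }

    float distance(vec3 p1, vec3 p2)
    {
        vec3 d = { p2.x - p1.x, p2.y - p1.y, p2.z - p1.z };   /* v = p2 - p1 */
        return length(d);
    }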

Now we will get into the more complex vector operators. First off is an operation that is clearly useful: the dot product (or "inner" product):
    dot = v1.x * v2.x + v1.y * v2.y + v1.z * v2.z;

where v1 and v2 are two vectors. As you can see the dot product is not a vector but a scalar. But what's the meaning of this value?

For two vectors of length 1, the dot product is in fact the cosine of the angle between them:
    dot = cos(angle)

More generally it equals |v1| * |v2| * cos(angle); this relation helps you understand what the dot product measures. The next operation is the cross product, which is given by this formula:
    new.x = v1.y * v2.z - v2.y * v1.z
    new.y = v2.x * v1.z - v1.x * v2.z
    new.z = v1.x * v2.y - v2.x * v1.y

The cross product of two vectors is another vector, which is perpendicular to both source vectors. This means the cross product really only works in R3, that is, in three-dimensional space; we can't have a vector perpendicular to two other vectors in 2D, can we :)
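A minimal C sketch of both products might look like this (helper names are made up; remember the cosine relation only holds once both vectors have length 1):

    typedef struct { float x, y, z; } vec3;

    float dot(vec3 a, vec3 b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    vec3 cross(vec3 a, vec3 b)
    {
        vec3 r;
        r.x = a.y * b.z - b.y * a.z;
        r.y = b.x * a.z - a.x * b.z;
        r.z = a.x * b.y - b.x * a.y;
        return r;   /* perpendicular to both a and b */
    }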

Weehee, this is getting hefty...

Now after all the real basics, we need some way to make everything we have gone through simple and fast. There is a way, with the help of "matrices". We won’t give a deep explanation of the matrices so if you are interested look at one of the links posted at the end of the article.

A matrix is simply an object with a given number of rows and columns where each cell holds a value. Let’s look at a simple example matrix:
    [ 1 0 ]
    [ 0 1 ]

The matrix shown above has 2 rows and 2 columns. As you can see they are truly filled only with numbers. We say that the matrix above is a 2x2 matrix. Note that the number of rows and columns in a matrix could be different. For instance, we may have:
    [ 0 0 ]
    [ 1 0 ]
    [ 0 1 ]

This is a 3x2 matrix. In 3D applications, we normally use 4x4 matrices (3x3 can occur as well). What can we do with these matrices and why are they so useful? As with vectors, we'll first look at the addition of two matrices. It’s very easy actually, and it’s done like this:
    [ a b ]   [ e f ]   [ a+e  b+f ]
    [ c d ] + [ g h ] = [ c+g  d+h ]

You just have to add the corresponding elements from the operand matrices and place them at the corresponding positions in the resultant matrix. One important thing to notice is that the resultant matrix has the same dimensions (rows and columns) as the two operand matrices.

Time for subtraction, which is the same as the addition but with a minus sign (you didn’t expect that, huh?):
    [ a b ]   [ e f ]   [ a-e  b-f ]
    [ c d ] - [ g h ] = [ c-g  d-h ]

And at last let’s look at multiplication, which is a little bit harder. I will assume that you will only need NxN (square) matrices, and so all you need to remember is:
    [ a b ]   [ e f ]   [ a*e + b*g   a*f + b*h ]
    [ c d ] * [ g h ] = [ c*e + d*g   c*f + d*h ]

Generally, each cell of the result is the sum of the products of a row of matrix A with a column of matrix B, element by element. The easiest way to show this is with some simple code:
    for (j = 0; j < size; j++)
    {
        for (i = 0; i < size; i++)
        {
            value = 0;
            for (k = 0; k < size; k++)
                value += A[j][k] * B[k][i];

            result[j][i] = value;
        }
    }

NOTE! (A * B) is NOT equal to (B * A). Just put that somewhere in your head and don't let it loose; forgetting this can cause some serious bugs. But what were these matrices all about? How do they relate to vertices? Funny you should ask, we were just getting there ;) To apply a matrix to a vertex, do this:
    R.x = v.x*m[0][0] + v.y*m[0][1] + v.z*m[0][2] + m[0][3];
    R.y = v.x*m[1][0] + v.y*m[1][1] + v.z*m[1][2] + m[1][3];
    R.z = v.x*m[2][0] + v.y*m[2][1] + v.z*m[2][2] + m[2][3];

This is actually a pure matrix multiplication: what we do is multiply a column matrix by a square matrix. If you want to look deeper into matrix multiplication, we suggest you take a look at the explanations offered by the link at the end of this article. In fact, it has notes for pretty much everything in this tutorial.

As you can see the result from the operation is a vector and not a matrix.
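To tie the pieces together, here is a small sketch (a sketch only; the row-major layout with the translation in the fourth column is an assumption) that builds a translation matrix and pushes a vertex through it using exactly the row-times-column rule shown above:

    typedef struct { float x, y, z; } vec3;

    vec3 transform(const float m[4][4], vec3 v)
    {
        vec3 r;
        r.x = v.x * m[0][0] + v.y * m[0][1] + v.z * m[0][2] + m[0][3];
        r.y = v.x * m[1][0] + v.y * m[1][1] + v.z * m[1][2] + m[1][3];
        r.z = v.x * m[2][0] + v.y * m[2][1] + v.z * m[2][2] + m[2][3];
        return r;
    }

    void make_translation(float m[4][4], float tx, float ty, float tz)
    {
        /* identity matrix with the translation in the fourth column */
        for (int j = 0; j < 4; j++)
            for (int i = 0; i < 4; i++)
                m[j][i] = (i == j) ? 1.0f : 0.0f;
        m[0][3] = tx;
        m[1][3] = ty;
        m[2][3] = tz;
    }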

Further Reading



  • www.mathworld.com - Here you can learn just about everything there is to learn about mathematics, physics and chemistry. There are deep explanations of why things work, with a lot of diagrams and other material that will help you understand the fun and exciting world of mathematics.



Original site: http://www.devmaster.net/articles/software-rendering/part1.php
