Tuesday, July 24, 2012

KriWeb project future

Introduction

KriWeb is my hobby 3D engine, the 4th incarnation of KRI technology. I've been working on it in my spare time for the last half a year. Recently, I finished implementing the heart of the concept: the shader composing pipeline.

Technology

In short, the shader compositor was designed to decouple the rendering technique code from the material and mesh modifiers. The material provides a set of functions to the pixel shader, which are used by the technique shader code. The technique knows how to apply an arbitrary stack of geometry modifiers (e.g. skeletal animation, morphing, displacement) without knowing anything about the actual modifiers used by the entity. The shader compositor assembles all these parts into a linked shader program that is associated with the entity.

As an example, we can imagine a material that provides a pure BRDF function. A technique knows about the scene lights, and uses this BRDF to evaluate the lights' contribution to a surface point. The underlying mesh gets modified by, say, skeletal animation. These pieces of functionality are glued together automatically to display a shiny animated object for you. While the Demo already shows this working, a better one could be made to harness the full power of shader compositing.
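To make the idea more concrete, here is a minimal Haskell sketch of the composing step. It is not the actual KriWeb pipeline (which is written in Dart), and all the names - modify_*, evaluateBRDF, a_position, u_viewProj - are made up for illustration: the compositor pastes the modifier and material snippets in front of the generated or technique-provided entry points.

import Data.List (intercalate)

type Glsl = String

-- Each modifier contributes a GLSL function "vec4 modify_<name>(vec4 pos)".
data Modifier = Modifier { modName :: String, modCode :: Glsl }

-- The material contributes the functions (e.g. a BRDF) used by the technique.
newtype Material = Material { matCode :: Glsl }

-- Vertex shader: paste every modifier's code, then generate a main()
-- that applies the whole modifier stack to the input position, in order.
composeVertex :: [Modifier] -> Glsl
composeVertex mods = intercalate "\n" (map modCode mods ++ [mainSrc])
  where
    applied = foldl (\e m -> "modify_" ++ modName m ++ "(" ++ e ++ ")") "a_position" mods
    mainSrc = "void main() { gl_Position = u_viewProj * " ++ applied ++ "; }"

-- Fragment shader: the material code goes first, then the technique body
-- that calls the material-provided functions (such as evaluateBRDF).
composeFragment :: Material -> Glsl -> Glsl
composeFragment mat technique = matCode mat ++ "\n" ++ technique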

Future

Now it is time for me to evaluate the path I've taken, and to figure out the direction for the near future. With all KRI incarnations (as with most hobby engines), there was always a big issue chasing me: the lack of an application. I dodged it as best I could, but in the end an engine dies without an application. I don't want to see KriWeb old and weak after several years of development. If it is to die, let it die young, and be remembered with its technology shining brighter than the sun.

In other words, I don't want to continue developing KriWeb until a real application is found. It may be either a new project of my own, or a cooperation with someone, but it has to be something good. It's not like I have a lot of free time now anyway: working at Rockstar is pretty close to a dream job, and my skills are needed there in full while making the next big thing. Cheers!

Wednesday, April 18, 2012

Revolution in Game Development


Hardcore gamers have struggled to find good games over the past 10 years. With each new release, each new title or demo, I've looked on with hope that it might be something incredible. But generally there wasn't a single great game, just a couple of good ones. Today, all big titles are targeted at the soft-core audience, because such games are easier to make and they sell well. Fortunately, this is going to change in 2013, and the roots of the revolution are visible now.
2013 will be the beginning of the next golden age of gaming. The epicentre of the last one was around 1998, and I'm sure there was at least one more before it, in the 80s. The reason for games to change is the revolution in the relationship between developers and users, with the publisher on its way to extinction. The key concepts of the new era are digital distribution and crowdfunding. This revolution is happening today, and the leaders have already shown up:
1. Steam (2002): the flagship of digital game distribution, built by Valve. Steam helps PC developers sell and advertise their games without having a publisher. Steam has also shown us that games don't need to be so expensive, and that the price can drop faster after the release, especially if the game turned out to be not as good as advertised.

2. Humble Bundle (2010): demonstrated the effectiveness of the pay-what-you-want business model applied to indie games. It also showed that copy protection is unwanted: both Humble Bundle and the GOG service provide only DRM-free content. An interesting discovery was that Linux/Unix users are willing to pay more than Windows gamers.

3. Minecraft (2009): an original game that became popular while still in open alpha. Minecraft was not the first, but it was the brightest and most successful example of a game sponsored by a live user community. People realized that they could not only pay for existing games, but also influence the future by investing in the ideas they like.

4. KickStarter (2008): a portal that connects game developers with gamers, who are ready to invest their money. Millions of dollars are gathered around ambitious projects, exceeding developer expectations by a large factor. It is the final link in a chain that leaves no place for big fat publishers. Well, except for console games... for now.

I call on everyone to sponsor the games you would really enjoy! This revolution will make 2013 a wonderful year for games, ones able to compete with the veterans of 1998. For a complete picture, here are the games I'm proud to support:

Tuesday, March 27, 2012

A Perfect Game


Computer games are a substantial part of my life. They bring new ideas and unique experiences, and challenge my tactics and reflexes. I always think about the qualities of games in general, and try to judge existing games by these characteristics. There is a game that reached my heart, and I would like to tell you about it.
I've played all genres, with RPG being my favorite. I adore Fallout 1/2, Arcanum, X-Com 1/2, Unreal-1, Planescape: Torment, Baldur's Gate, Jagged Alliance, MechCommander, and other classics of the 90s. Since that golden age the overall depth of content has been steadily decreasing, while the visuals have been getting more and more detailed. These titles are well respected by a limited community, but the game I want to talk about is not among them. It came out in the shadow of these titans, and with its unique language, enormous system requirements, and little-to-no advertising, it was left with no chance to shine.


The game Vangers was created by the pure geniuses of the Russian K-D Lab studio. It features a unique voxel-based terrain engine, a futuristic story of novel quality, and gameplay that mixes RPG, action, and simulation. The world you are literally thrown into lives by its own laws (unlike X-Com/JA/MC, where everything is user-centric). You are not special there (unlike Fallout/Arcanum/Planescape/BG); in fact, there are a thousand others who are faster, stronger, and even able to reach the story goals before you. The world behaves like a living organism: leave the controls for a second, and you will notice swarms of little creatures flying, swimming, and crawling in the terrain; other vangers rushing in a race competition; the global world cycle changing from winter to warm summer... The landscape is dynamically persistent through the game: destroy a bridge in a crazy fight, and you have to look for another way to cross the river. The role-playing is based on your actual actions, not on some numbers in your stats. On the one hand, you are absolutely free to do anything there; even suicide can be your game ending. On the other hand, there is a strong story line that keeps you fully motivated to explore.


The game never got the reception it deserved. Some people love it, some hate it, others don't understand it or have simply never heard of this gem. It could make a perfect MMOG, but it already has various multi-player modes, and games are still being hosted. I enjoyed *playing* the story, because this is game play in its purest sense. From that moment on, I started looking at the real world with the eyes of a vanger.

Saturday, February 4, 2012

Report: the end of the 3rd development age

Year 2012 started with a new vision of the technologies I want to use to achieve the same old goals. The 3rd age of my personal projects lasted for 2.5 years and is over now. I'd like to give a small overview of what is going to be left in the past, and who my new friends are for the near future.



Boo, the dinosaur of the old age.
The major fault of Boo, for me, was its immaturity. Imagine developing a killer feature in your project and then getting an "unknown compiler exception" after you've changed tons of code without being able to compile. The next thing you do is spend a day narrowing down the issue and providing a test case for the bug, and then the next day trying to work around it, "temporarily". That's time you could have spent doing something important for you, not for the language's creator and community. I tried to patch the imperfections of the generics implementation with smart AST macros, and it was indeed fun.

The other faults come from Boo's main platform: .NET/Mono. I've run into some serious inconsistencies and ambiguities there. For example, OOP polymorphism can be achieved in two ways: using virtual methods or via implicit interface implementations. I had to use both, because this is how the platform is set up.

Portability was the last major issue. While you could safely copy binaries to Linux/MacOS and try to execute them, this didn't work smoothly in practice, not to mention that tablets/phones were completely out of scope. Errors about some library of some version not being found on the target platform drove me crazy.

I still like Boo and will continue to use it in a Unity-based project. Working with Boo was an important stage in my professional growth, but it's time to move forward now.


Dart + WebGL, the ultimate portability solution.
OpenGL is developing too quickly for me. It's difficult to constantly adapt to new features and redesign the system. WebGL is much more stable. It can potentially work on any platform, without any platform-specific client code. Back in the C times, I could give you the sources to build and run locally. In the .NET/Mono/Java times, I could provide binaries that would likely just work. In the Web times, I just give you a link...

The 4th iteration of the KRI engine will be developed from scratch to work on WebGL. Everyone will be able to instantly see the result of my efforts without setting up any development environment. I have much more experience now, having worked with 3 different mature engines. I have a clear understanding of the goals and principles upon which the new engine structure should be built. Besides, the philosophy of not doing anything heavy on the CPU and generating as much as possible fits WebGL very nicely.

Dart comes as a perfect replacement for JavaScript here. Cleaner code, better OOP and FP integration, and finally the opportunity to be in the first wave of new WebGL applications. I can't imagine doing anything serious with JS, and I like Dart very much so far.


Haskell, the global paradigm shift.
Meeting functional programming changed my mind, and I'll never be the same again. No, I'm not going to stop using imperative languages, but the way I look at code is very different today. The Haskell experience helped me develop a vision of really clean and error-free code. For each piece of logic, I now prefer to know exactly the input data and the result, removing all implicit and hidden flows. For example, I consider any singleton or static piece of data dangerous.

Haskell is now the language of my AI experiments. While I'm still learning, I now think at a higher level of abstraction, which simplifies development and lets me concentrate on ideas more than on tools.


Conclusion.
Learning new principles is very beneficial. It's not just about new abilities, it's also about looking at old things from a very different perspective. I had a lot of fun with OpenGL 3 and Boo; they were a real step forward compared to my C/C++-only 2nd age. But even more fun waits ahead. The new development principles that I'll carry through the 4th age are very promising, and I'll do my best to realize their potential in full.

Monday, January 9, 2012

Functional thinking

I started reading the book "Learn You a Haskell". It is truly wonderful, explaining complex things in a simple, friendly way. Haskell itself seems to be *the* functional language. Its pureness makes you feel like you are writing math lemmas and theorems in a formal language; it is a very different feeling from the C++/Boo I used to program in.

Surprisingly enough, functional programming makes us care about the goal, or the shape of the result. You answer questions like "What should it look like? What does it consist of?". Imperative programming, at the same time, mostly involves asking the question "How?".

As a first milestone and a practical task in learning Haskell, I wrote the Burrows-Wheeler Transform (BWT). I never thought it could be implemented in just 10 lines (not counting qsort) and remain nicely readable at that:

qsort :: (Ord a) => [a] -> [a]
qsort [] = []
qsort (x:xs) = qsort left ++ [x] ++ qsort right
    where left  = filter (<=x) xs
          right = filter (>x)  xs

bwt :: String -> (Int,String)
bwt input =
    let buildMx [] _      = []
        buildMx (x:xs) ch = (x:xs,ch) : buildMx xs x
        mx     = buildMx input (last input)
        sorted = qsort mx
        output = map snd sorted
        base   = head [ i | (i,(s,_)) <- zip [0..] sorted, s==input ]
    in (base,output)
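
As a quick sanity check (my own toy run, not from the original post, so treat the expected value accordingly), feeding it a small input:

main :: IO ()
main = print (bwt "banana")
-- should print (3,"nnbaaa"): the characters preceding each suffix in sorted
-- order, plus the position of the whole input among the sorted suffixes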

Friday, January 6, 2012

Suffix sorting in linear time with no extra space

Intro
Suffix array construction (or, more generally, suffix sorting) is an important task: finding the lexicographical order of all suffixes of a given string. It is heavily used in data indexing and BWT compression, which I've been working on for a long time now.
First of all, it was surprising for me to discover that this is possible at all. By linear time I mean O(N) asymptotic execution time, where N is the input array size. By no extra space I mean that the only memory required is for the input array (N bytes), the suffix array (4N bytes), and some constant storage (O(1) bytes).

Induced sorting (SA-IS) is the key. Its authors did a great job of exploring it. The major breakthrough was to use induction to sort the LMS sub-strings, in addition to using it afterwards to recover the order of all the other suffixes. The only thing missing was a proper memory bound for the algorithm. The authors claim 2N is the worst-case additional memory. Let's see how we can narrow this value down.

Proof
For an input string of size N, let's define M to be the total number of LMS-type suffixes. The memory (R(N,M) machine words) required for a recursive call consists of three sections:

  • Suffix storage = M
  • Data storage = M
  • Radix storage = K(M), where K(M) is the number of unique LMS sub-strings

R(N,M) = M + M + K(M) <= 3M <= 3N/2, since K(M) <= M and M <= N/2

Now, let's split the LMS sub-strings according to their length. There can be no LMS sub-strings of size 1. There are L2 of size 2, L3 of size 3, and so on. Let's now define M and N in terms of these numbers:
M = Sum(i){ Li }
N = Sum(i){ i * Li }

We know that LMS sub-strings of different sizes can not be equal. Therefore, we can safely split K(M):
K(M) = Sum(i){ K(Li) } = K(L2) + K(L3) + ... + K(Li) + ...
K(Li) <= Li

We know that the number of unique sub-strings of size 2 can not exceed 2^16 (for a byte alphabet), which is a rather small number. For convenience, let's name a=L2 and b=L3+L4+... = Sum(i>2){ Li }. It is time to give a better upper bound on R(N,M):
M = a+b
N = 2L2 + 3L3 + 4L4 + ... >= 2L2 + 3(L3+L4+...) = 2a + 3b
R(N,M) = 2M + K(M) = 2a+2b + K(L2) + Sum(i>2){ K(Li) } <= 2a+2b + 2^16 + b = 2a+3b + 2^16 <= N + O(1)
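
To see the whole arithmetic in one place, here is a tiny Haskell helper (my own illustration, not part of the proof and not related to A7) that takes the counts L2, L3, ... and evaluates both sides of the final inequality:

-- ls = [L2, L3, L4, ...]; returns (upper bound on R(N,M), N + 2^16)
recursionMemory :: [Int] -> (Int, Int)
recursionMemory ls = (bound, nPlusConst)
  where
    a          = if null ls then 0 else head ls   -- a = L2
    b          = sum (drop 1 ls)                  -- b = L3 + L4 + ...
    n          = sum (zipWith (*) [2..] ls)       -- N = 2*L2 + 3*L3 + ...
    bound      = 2*a + 3*b + 2^16                 -- >= R(N,M)
    nPlusConst = n + 2^16                         -- and bound <= nPlusConst

For example, recursionMemory [3,2,1] gives (65551, 65552): the bound indeed stays within N + 2^16.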

Well, that's it. By carefully arranging the data in memory and allocating just 2^16 additional words we can perform a recursive call to SA-IS, and, therefore, construct a complete suffix array.

Code
My version of SA-IS is called A7. Its advantages over Yuta's SA-IS implementation (v2.4.1):
  • 5N is the worst-case memory requirement. It doesn't depend on the input.
  • No memory allocation/free calls inside the sorting procedure.
  • Data squeezing: using smaller data types for recursion if range allows.
  • Cleaner C++ code with more comments.

Monday, October 17, 2011

AI concepts mind dump

I've been thinking about AI structure for at least 7 years already. Thinking a bit and not too often, but steadily advancing. There is an open-source project where I do all the practice, but today I would just like to dump the theory, giving my mind more free space to advance.

Agent - the subject of AI; it has a mind in the form of a neural network (NN).

In a time-step-based approximation, an agent takes input from its receptors. The input goes through the NN, producing some charge on the actor neurons. One of the actors is chosen to be performed each time step. The result of the action is spread back through the NN, adjusting and optimizing the network structure.

Neuron - a base element of the NN; it has (see the sketch after this list):

  • A list of pointers to output neurons (axons).
  • Current charge and response.
  • A memory map of stamp->charge.
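
Translated literally into code, the structure above could look like this (a Haskell sketch of my own; the field names are made up):

import qualified Data.Map as Map

type Stamp  = Int
type Charge = Double

-- A neuron exactly as listed above: output links, current state,
-- and a map of past charges keyed by time stamp.
data Neuron = Neuron
    { axons    :: [Int]                -- indices of the output neurons
    , charge   :: Charge               -- current charge
    , response :: Charge               -- current response
    , memory   :: Map.Map Stamp Charge -- memory map of stamp -> charge
    }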

Weights.
In the classical NN model, inputs have different weights. The same effect can be achieved in a uniform-weight model (all inputs/outputs of a neuron are equal) by adding similar neurons whose inputs are a subset of the original neuron's inputs. Therefore, I'm going to throw weights away, treat each input/output as equally strong, and distribute the signal uniformly.

Adaptation.
One of my experiments showed me that you can not get by with a static NN structure. You have to add and remove links and neurons, at least logically (one might say that biologically nothing new is created during the thinking process, but I have no proof of that and don't need it). Therefore, there is no point in adjusting input weights as part of the agent's adaptation process - a pure neuron/link creation/removal heuristic would produce the same result and is required anyway.

Signal.
A neuron is not a function - it's just a multiplexer and a transmitter of the signal (which can be mathematically expressed as a function). Therefore, it has to preserve the total signal strength from inputs to outputs, taking some tiny portion for its efforts.
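
In code, that conservation rule could look like this (a made-up sketch; the 1% transmission cost is an arbitrary number, not a tuned value):

-- Split an incoming charge evenly across n outputs, keeping a tiny
-- portion as the cost of transmission; the rest is preserved in full.
transmit :: Double -> Int -> [Double]
transmit c n = replicate n (c * (1 - cost) / fromIntegral n)
  where cost = 0.01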

Memory.
Memory of an agent as a whole is constructed from the past charges stored within each neuron. A past charge does not have a time reference, but the mind can scan all neurons' charges at once for a particular time stamp. It is still an open question how these past charges participate in the decision process of an agent.