Thursday, June 28, 2012
I rarely talk about anime on this blog, sticking more to gaming when the mood strikes me to discuss my hobbies. But today, I'm changing it up a bit. I thought I'd put together a list of anime shows I have been watching that contain conservative themes, and try to analyze them a bit. Maybe this will also help some of my friends of a leftward bent to better understand what exactly conservatism is -- what it stands for, and what it's all about.
In no particular order, I'm going to start with Ginga Eiyuu Densetsu (Legend of the Galactic Heroes). This is a series of one hundred and ten episodes of glorious battles in space, fought between enormous fleets of ships, all set to the music of great classical composers like Wagner and Dvorak. For an example, here's part of episode 15:
Gotta love the music, right? First of all, let me give you a background summary. The two sides whose conflict serves as the backdrop for everything that goes on are as follows. On one side (which vaguely resembles 1950s America in style and dress) we have the Free Planets Alliance. The show doesn't delve deeply into the details of its government, but it is nominally a democracy, led by a small cabinet staffed by fairly corrupt politicians whose primary concern is getting re-elected rather than achieving any kind of lasting peace in the 150-plus-year war they've been waging with the Galactic Empire. The Empire (marked by its 18th-century dress) is basically feudal, with a Kaiser reigning over several planets, each of which is governed by a noble. The Alliance was formed by emigrants who fled the Empire some years after it became a feudal state (it, too, had been something of a democracy). The show covers a few generations, and I haven't actually gotten very far along in watching it (somewhere near episode 34 or 35 at the moment), but the primary characters at the start are a rising general in the FPA named Yang Wen-li and a rising general in the Empire named Reinhard von Lohengramm.
I don't want to spoil too much of the show, so let's cover the basic reasons why it's on this list. The show repeatedly compares the two nations against each other, with the intended result, I think, of making the viewer conclude that both are pretty bad and that we need something with elements of both. Specifically, the FPA becomes more and more fascistic/socialist as time goes on, as the politician who starts out as secretary of defense gains more power and proves highly charismatic (ironic that he becomes more Hitler-like than the Empire characters, who are clearly modelled on pre-WWI Prussia/Germany). The FPA is also routinely plagued with social disorder in the form of anti-war protests, terrorism, and, at one point, civil war after a military coup. (In some ways the FPA's military works more like Rome's... but I digress.) As for the Empire, the factionalism between different nobles worsens on the Kaiser's death, and civil war breaks out there as well, though social order remains largely intact during and after it. So, as I said, both systems of government get some analysis, and the end result seems to be that there are pros and cons to both, rather than democracy being upheld as intrinsically better. In turn, this suggests that something like a constitutional monarchy or the American republic is best (a mix of democracy and oligarchy). Naturally, I think this is a fairly conservative theme in itself.
Aside from that, Yang is something of a pacifist, but he is constantly jarred from his naivety by the situations around him and forced to fight. So we get treated to discussions that lead to a moral similar to Heinlein's "violence is an answer" speech from Starship Troopers, where the idealist must abandon his pacifism and take up something more pragmatic. There's also a salvo against public education (to some extent), in that Yang is as good a general as he is because he spent his time reading military history rather than sticking with what he was 'supposed' to be doing at school. And he's basically homeschooling his ward (who later becomes something of a badass general himself).
Thursday, June 2, 2011
Are Computers Isomorphic to Humans?
Let me preface this essay by admitting that artificial intelligence is an old problem, and acknowledging that better minds have attacked the question before me (and certainly better minds will come after). Hence I can in no way expect to resolve the query at hand, only to jot down a few thoughts on the matter, seeking to move myself a bit closer to understanding truths which may ultimately be beyond human reach. Perhaps it is just as well that most “objective truth” is, at the end of the day, of this nature – for it may well be that we find meaning only by searching for it. Since I have probably crushed my own credibility enough on the matter at hand, 'tis high time to move on to actually getting lost in the fog, eh?
No discussion of artificial intelligence would be complete without mentioning the ill-fated Alan Turing. His test, designed at the very dawn of the computer era, is nonetheless still one of the most respected procedures for determining whether or not a computer has “intelligence.” If you are not familiar with the Turing Test, I will try to explain it concisely: a judge reads typewritten messages from two hidden subjects (one a computer, one a human) and attempts to distinguish which subject is the computer, and which the human. Supposedly, if the judge guesses wrong about half the time or more, we can at that point say that computers have intelligence, or are close enough. Now granted, this idea for a test is not particularly novel (when one thinks of proving a thing sentient, what could be more natural than to compare the thing against oneself?), but it has served as a good base to build thought experiments from. And, hey, it has become something of a tradition to hold such tests as a sideshow at computer conventions, especially those involving artificial intelligence. Anyway, let us move on to discussing some of the theoretical conclusions arrived at in attempting to build machines that could pass the test, as well as some of the problems inherent to the approach.
First and foremost, the kinds of answers to user input that Turing hypothesized a computer might give are pretty much as relevant today as they were in his time. Certainly, a subject having the capacity to do arithmetic correctly at lightning speed would arouse suspicions that said subject was of more metallic origins. Yet, as Turing pointed out, and as anyone who is familiar with programming “easier” settings for video games could tell you: it is quite simple to program a computer to occasionally give wrong answers, and to wait any given amount of time before replying. So clearly, making a computer behave “as poorly” at math as your typical human is not particularly difficult -- and in learning this we have largely set such behaviors aside as irrelevant to determining how intelligent computers really are. Indeed, we were probably barking up the wrong tree from the start by trying to define intelligence as the ability to make mistakes. Or perhaps I should say “appear to make mistakes.” More on that in a bit.
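To make the point concrete, here is a minimal sketch (my own toy, nothing from Turing) of how trivially a machine can fake human arithmetic; the function name, the delay, and the error rate are all made up for illustration:
function answer = HumanizedSum(a, b)
%HUMANIZEDSUM Toy sketch: answer an addition problem the way a human might,
%with a pause before replying and the occasional small mistake.
pause(1 + 4 * rand());              %take a "human" one to five seconds
answer = a + b;
if rand() < 0.05                    %botch roughly one answer in twenty
    offset = randi([1 9]);          %be off by a single digit's worth
    if rand() < 0.5, offset = -offset; end
    answer = answer + offset;
end
end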
For now, let us consider how a computer might answer a question of aesthetic beauty. Suppose you showed the computer a painting and asked whether it was beautiful. What kind of answer would give the computer away? To paraphrase Douglas Hofstadter, the author of the main text we have been reading in this class (Godel, Escher, Bach): does the computer have a large enough soul to appreciate Bach? We could simply give the computer a learning algorithm (such as a Support Vector Machine or a Neural Network) and a ton of examples of various types of music and artwork, letting it form a rough aesthetic scale for itself. The computer could then turn around and spit out an answer about where on the scale a given piece of art fell, and have it be a rough enough judgment to pass muster with our Turing Test judge. (For a rough estimate is all humans really give about such a nebulous thing as “beauty” anyway.) What other foils must a computer surmount to pass a Turing Test?
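As a sketch of what I mean, here is how the one-norm soft-margin trainer from my earlier posts (below) might be pointed at the problem. Everything here is hypothetical: the features are random stand-ins for numbers extracted from paintings, the labels are pretend human judgments, and I am assuming H is the label-weighted kernel, which is how I build it in my own experiments:
%Hypothetical setup: each row of X describes one painting, and each label
%records whether a panel of humans called it beautiful (+1) or not (-1).
X = rand(40, 5);                      %40 made-up paintings, 5 features each
labels = sign(randn(40, 1));          %made-up human judgments
labels(labels == 0) = 1;              %sign() can return 0; force a class
K = X * X';                           %plain linear kernel
H = (labels * labels') .* K;          %assumed form of the quadprog Hessian
C = 10;                               %soft-margin penalty, chosen arbitrarily
[weights, bias] = TannSchmidVectorMachineSoftMarginUno(K, H, labels, C);
%Place a new painting xNew somewhere on the learned aesthetic scale:
xNew = rand(1, 5);
verdict = (X * xNew')' * (weights .* labels) + bias;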
How about language and communication? Surely a computer that gave responses which sounded repetitive or canned would give itself away. Alternatively, a computer that could not piece together the intended meaning of a given phrase could never return to the user some useful commentary on it, never mind a meaningful response to the message. Yet, assuming our judge has an infinite amount of time to keep asking questions (English providing a framework to form an infinite number of different meaningful sentences), it stands to reason that a computer can only have a finite number of potential responses. And, while a meaningful sentence-forming algorithm seems possible hypothetically, humanity has yet to produce one that actually works well enough to imagine it fooling anyone for very long. Likewise, we have yet to produce a parsing algorithm guaranteed to parse any and every meaningful sentence, which is best illustrated by the fact that we still code in “Do What I Say” mode instead of “Do What I Mean” mode; which is to say, programming languages are still not in the same category as natural language.
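For a feel of how quickly canned responses wear thin, here is a toy responder (entirely my own invention, nothing so elaborate as Hofstadter's examples). It sounds passable for an exchange or two; a judge with unlimited time exhausts its whole repertoire in minutes:
function reply = CannedReply(msg)
%CANNEDREPLY Toy chatbot with a fixed repertoire of pattern-matched
%responses -- exactly the kind of thing a patient judge will see through.
if ~isempty(regexpi(msg, 'hello|\<hi\>'))
    reply = 'Hello there! How are you today?';
elseif ~isempty(regexpi(msg, 'weather'))
    reply = 'I hear it has been lovely out lately.';
elseif ~isempty(regexpi(msg, '\?\s*$'))
    reply = 'That is a good question. What do you think?';
else
    reply = 'Interesting. Tell me more.';
end
end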
Now, one could write really complex code, such as what Mr. Hofstadter demonstrated in his dialogue between the MIT graduate student and his pet AI. But I submit that even code like that would have problems dealing with an infinite number of different queries from a judge. For a more in-depth example, consider compilers. Compilers require their input to fit specific syntactic rules, along with a small number of semantic constraints, and in practice a parser decides what to do next by "looking ahead" at only a small, fixed number of symbols in the input; programming languages are deliberately designed so that such limited lookahead suffices, because no implementation gets an unbounded view of what is coming. Natural language offers no such guarantee. At the very least, I cannot say I have seen an algorithm capable of either interpreting or generating an infinite variety of meaningful phrases. Or more simply, there is no algorithm that can keep it up indefinitely. At some point the computer will produce an unnatural enough sentence or phrase, or will respond in a completely unexpected way to a misinterpreted sentence, and it will be found out by our judge.
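To ground the lookahead point, here is a minimal recognizer for a toy grammar of my own choosing; every decision it makes comes from peeking at exactly one upcoming token, which is representative of how real parsers budget their lookahead:
function ok = RecognizeExpr(tokens)
%RECOGNIZEEXPR One-token-lookahead recognizer for the toy grammar
%  E -> n | ( E + E )
%e.g. RecognizeExpr({'(','n','+','n',')'}) is true; {'(','n','+'} is not.
pos = 1;
ok = expr() && pos > numel(tokens);   %must consume the whole input
    function tf = expr()
        if peek('n')                  %one token of lookahead decides...
            tf = eat('n');
        elseif peek('(')              %...which production to take
            tf = eat('(') && expr() && eat('+') && expr() && eat(')');
        else
            tf = false;
        end
    end
    function tf = peek(t)
        tf = pos <= numel(tokens) && strcmp(tokens{pos}, t);
    end
    function tf = eat(t)
        tf = peek(t);
        if tf, pos = pos + 1; end
    end
end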
Of course, all this discussion might be quite moot, because much of what we just asked a computer to do, a human may have just as much trouble doing. Does the idiom “lost in translation” ring a bell? Even when two perfectly healthy humans are speaking the same language, with words both know very well, it is not so uncommon to encounter misunderstandings and barriers to communicating meaning and intent through completely human means of discussion. So, if we are trying to make computers human-like, why even bother considering linguistic operations past some decently approximate level? Perhaps we have lost sight of our goal: to write an artificial intelligence capable of fooling a human into believing that the artificial intelligence is not artificial after all. Sure enough, we can imagine a finite set of human-tailored responses the computer could use to fool a human over the short term, and even something more along the lines of a learning program might be used to good effect if the judge queries only within a particular subset of language. But if we remove the constraints upon the judge's time and number of queries, it is virtually impossible to indefinitely fool the judge into believing the computer's responses and requests might have arisen from a human.
So, perhaps you were wondering when and if I would ever pick back up that thread about the “appearance” of a computer making mistakes. Well, I suppose I have kept you waiting long enough. It is my opinion that, while trying to satisfy the Turing test has steered computer science in the direction of building better and better artificial intelligence in the truest sense of artificial, it is quite a pointless venture to begin a journey towards creating intelligence by trying to “fool” a human into thinking a computer is a human. In fact, if our goal is to produce intelligence equivalent to what we have as humans, we should be far more concerned with the implementation than with anything it actually does. Who cares if this electrical brain can play checkers; is it self-conscious? Does it make decisions in a non-deterministic way? Does it have a will?
But here we have encountered a bit of circular logic, for here we are asking whether computers can perform in a way we are not actually sure humans do. We need to ask if thought really is just a higher-level representation of neuron firings, and not something more. Without the notion of free will, all behavior is simply a set of programmed responses to the environment, based upon some unique combination of genes and how our learning algorithms adapted themselves to our environments over time. In such a case the Turing test would actually be quite appropriate, because our own sense of ourselves as free-willed and non-deterministic beings would be utterly delusional. If we accept our own sentience in these terms, we can quite easily answer our original query (whether humans and computers are isomorphic) in the affirmative, but can we reason our way to such a conclusion? Perhaps, in the same way Escher's Dragon cannot actually become three-dimensional, and analogous to the way Godel proved any formal system sufficiently strong to reason about itself must always be incomplete (“incomplete” meaning there are well-formed propositions the system can state which cannot be decided within the system), we humans cannot find our way to a decision about whether we are sentient using a system of predicate logic. Granted, we cannot even prove that the aforementioned question is undecidable at this juncture, but I could hardly do justice to you, the reader, if I left us wandering through the fog without offering any kind of solid ground to stand upon. So humor me in accepting the question as currently undecidable.
Now we may do one of three things. We could leave the question as undecidable, and appreciate the zen of it all from a distance. We could accept that humans have sentience in the sense we have been discussing (having free will, among other concepts intentionally left somewhat nebulous). Lastly, we could accept the contrary notion, that humans are not sentient. I am going to ignore the first option, because taking that route would leave us adrift in the same murky waters we were in before accepting that the question was undecidable. Given that the immediate consequences of accepting the notion that humans are not sentient are quite depressing (essentially it would mean accepting a completely deterministic world, where any notions of individuality, achievement, right and wrong, etc. are wholly delusional), I think I will hold off on accepting that conclusion. Where does this leave us? Accepting that humans have a brand of intelligence that includes free will. If we accept that, then we have pretty much answered our fundamental query in the negative, because computers and whatever output we get from them are ultimately limited to how we interpret their electrical signals; by definition this puts computers in an entirely different category from humanity.
Labels:
Computer Science,
Mathematics,
philosophy
Wednesday, May 11, 2011
Dev Diary
Spent most of this last week switching between work on developing a Check() hierarchy for validating XML importation, and a Minesweeper app for my Android app programming class. The former project I can't discuss much, except to say that I need to learn how to work with registry design patterns, and am not quite clear where the hierarchy is going to go. The latter I am finding much more difficult than it seemed at first.
Apparently, most everyone is doing the actual setup of the boxes manually instead of programmatically, putting their positioning information into the XML by hand. The way I envision it is more automated (but, in turn, more difficult to figure out how to do), with the number and size of the boxes adapting to user input.
Labels:
Computer Science,
CS,
Dev Diaries,
Java
Wednesday, May 4, 2011
Dev Diary May 4th, 2011
So, I got a job as an intern for a company by the name of Data In Motion as of this last Monday. Not much to say about it yet, except that there's a ton to the job, and I am currently... whelmed by the tasks ahead. That is to say, it's certainly daunting, but I think if I take it one piece at a time (like one of my favorite Johnny Cash songs) I'll get to understanding it, and be able to make myself useful to my employers. For the time being, they are doing a lot more for me than I am for them, and for now I can only hope to be a good return on that investment.
All that having been said, I probably won't be sharing my code from work-related projects here on my blog, unless given explicit permission to do so. I am also starting a class on developing Android applications, and may share some of the toy programs from that class (though if anything looks even marginally profitable, I'll be keeping mum). So that's the situation, if you were wondering when my next interesting tidbit of code would appear.
And now I'm off to spend the rest of the evening getting used to some new equipment, and to read up on the marker, visitor, and singleton design patterns. On a complete side note, I should remind myself to strongly suggest to my professors here at Westminster that they offer a class focused on design patterns.
Labels:
Computer Science,
CS,
Dev Diaries,
Java
Thursday, April 28, 2011
SVM Tester
function [score, iter, correct, falseNeg, falsePos, unsures] = TestVectorMachine(weight, kernel, labels, bias)
%Test function for vector machine -- takes in the kernel matrix of a set of
%training data and its labels, and tests a given set of alphas (weights)
%and a bias against it. Returns the percentage score, the number of misses,
%and a breakdown into false negatives, false positives, and points landing
%exactly on the decision boundary.
[xn, xm] = size(kernel);
[yn, ym] = size(labels);
[an, am] = size(weight);
%sanity check: kernel must be square and match the labels and weights
if xn ~= xm || yn ~= xn || an ~= xn
    display('Sorry, this is an idiot proof function. Try again!');
    score = 0; iter = 0; correct = 0;
    falseNeg = 0; falsePos = 0; unsures = 0;
    return;
end
falseNeg = 0;
falsePos = 0;
unsures = 0;
iter = 0;   %counts the misclassified (or undecided) points
for i = 1:xn
    %decision value f(x_i) = sum_j alpha_j * y_j * K(x_i, x_j) + b
    fXi = kernel(i,:) * (weight .* labels) + bias;
    if (fXi * labels(i)) <= 0
        if fXi > 0
            falsePos = falsePos + 1;    %predicted +1, actually -1
        elseif fXi < 0
            falseNeg = falseNeg + 1;    %predicted -1, actually +1
        else
            unsures = unsures + 1;      %landed exactly on the boundary
        end
        iter = iter + 1;
    end
end
score = (xn - iter) / xn * 100;     %percent classified correctly
correct = xn - iter;                %count classified correctly
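A quick usage sketch: assuming weights and bias have come back from one of the trainers below, and K and labels are the training kernel matrix and labels, a call might look like this:
[score, misses, correct, fn, fp, unsure] = TestVectorMachine(weights, K, labels, bias);
fprintf('%.1f%% correct; %d false negatives, %d false positives, %d undecided\n', score, fn, fp, unsure);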
Labels:
Computer Science,
CS,
Mathematics,
MATLAB
Soft Margin one-norm SVM
function [weights, bias] = TannSchmidVectorMachineSoftMarginUno(K, H, labels, C)
%One-norm soft margin SVM trainer: K is the kernel matrix, H the Hessian
%handed to quadprog, labels the +1/-1 training labels, and C the margin
%penalty.
[xm, xn] = size(K);
[ym, yn] = size(labels);
%scale C down by the number of training points
C = C / xm;
%check to make sure training data & labels have the same dimension
if xm ~= ym
    display('Sorry, this is an idiot proof function. Try feeding in valid parameters next time, doof!');
    return;
end
%allocate space for the quadprog pieces
f = zeros(xm, 1);
A = zeros(2 * xm + 4, xm);
b = zeros(2 * xm + 4, 1);
%build constraints matrix: rows 1-2 force labels' * alpha = 0, rows 3-4
%force sum(alpha) = 1, and the rest box each alpha into [0, C - epsilon]
A(1,:) = labels';
A(2,:) = -labels';
A(3,:) = ones(1, xm);
A(4,:) = -ones(1, xm);
for i = 1:xm
    A(i+4, i) = 1;
end
for i = 1:xm
    A(i+4+xm, i) = -1;
end
b = [0; 0; 1; -1; (C - 10^(-7)) * ones(xm,1); zeros(xm, 1)];
[weights, v] = quadprog(H, f, A, b);
%find the bias, then normalize by the margin ||w||
bias = GetSoftBiasUno(weights, K, labels, C);
bias = bias / sqrt(weights' * H * weights);
save('recordedResults0', 'weights', 'bias', 'K');
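And a hedged sketch of how I call this: the data here is fabricated, the Gaussian kernel and its width are just one choice among many, and H is assumed to be the label-weighted kernel as in my own experiments:
X = rand(50, 3);                          %fabricated training data
labels = sign(randn(50, 1));
labels(labels == 0) = 1;                  %sign() can return 0; force a class
sigma = 1.0;                              %Gaussian kernel width (arbitrary)
sq = sum(X.^2, 2);
D = sq * ones(1, 50) + ones(50, 1) * sq' - 2 * (X * X');   %pairwise squared distances
K = exp(-D / (2 * sigma^2));              %Gaussian (RBF) kernel matrix
H = (labels * labels') .* K;              %assumed quadprog Hessian
[weights, bias] = TannSchmidVectorMachineSoftMarginUno(K, H, labels, 10);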
Labels:
Computer Science,
CS,
Mathematics,
MATLAB
Soft Margin 2-norm SVM
function [weights, bias] = TannSchmidVectorMachineSoftMarginDos(K, H, labels, C)
%2-norm soft margin SVM trainer: K is the kernel matrix, H the Hessian
%handed to quadprog, labels the +1/-1 training labels, and C the margin
%penalty.
[xm, xn] = size(K);
[ym, yn] = size(labels);
%scale C down by the number of training points
C = C / xm;
%the 2-norm soft margin folds the penalty into the Hessian as a ridge term
H = (1/2) * (H + (1 / C) * eye(xm, xn));
%check to make sure training data & labels have the same dimension
if xm ~= ym
    display('Sorry, this is an idiot proof function. Try feeding in valid parameters next time, doof!');
    return;
end
%allocate space for the quadprog pieces
f = zeros(xm, 1);
A = zeros(xm + 4, xm);
b = zeros(xm + 4, 1);
%build constraints matrix: rows 1-2 force labels' * alpha = 0, rows 3-4
%force sum(alpha) = 1, and the rest keep each alpha non-negative (no upper
%box constraint is needed in the 2-norm case)
A(1,:) = labels';
A(2,:) = -labels';
A(3,:) = ones(1, xm);
A(4,:) = -ones(1, xm);
for i = 1:xm
    A(i+4, i) = -1;
end
b = [0; 0; 1; -1; zeros(xm, 1)];
[weights, v] = quadprog(H, f, A, b);
%find the bias, then normalize by the margin ||w||
bias = GetSoftBiasDos(weights, K, labels, C);
bias = bias / sqrt(weights' * H * weights);
save('recordedResults0', 'weights', 'bias', 'K');
Labels:
Computer Science,
CS,
Mathematics,
MATLAB
Hard Margin one-norm SVM
function [weights, bias] = TannSchmidVectorMachineHardMarginUno(K, H, labels)
%Hard margin SVM trainer: K is the kernel matrix, H the Hessian handed to
%quadprog, and labels the +1/-1 training labels.
[xm, xn] = size(K);
[ym, yn] = size(labels);
%check to make sure training data & labels have the same dimension
if xm ~= ym
    display('Sorry, this is an idiot proof function. Try feeding in valid parameters next time, doof!');
    return;
end
%allocate space for the quadprog pieces
f = -ones(xm, 1);
A = zeros(xm + 2, xm);
b = zeros(xm + 2, 1);
%build constraints matrix: rows 1-2 force labels' * alpha = 0, and the
%rest keep each alpha non-negative
A(1,:) = labels';
A(2,:) = -labels';
for i = 1:xm
    A(i+2, i) = -1;
end
[weights, v] = quadprog(H, f, A, b);
%find the bias
bias = getHardMarginBias(weights, K, labels);
save('recordedResults0', 'weights', 'bias', 'K');
Labels:
Computer Science,
CS,
Mathematics,
MATLAB
Soft Margin one-norm Bias Calculator
function [bias] = GetSoftBiasUno(weights, kernel, labels, C)
%Averages the bias over pairs of oppositely-labeled margin support vectors
%(those with 0 < alpha < C): for such a pair, b = -(<w,x_i> + <w,x_j>)/2.
[xm, xn] = size(kernel);
counter = 0;
bias = 0;
for i = 1:xm
    if weights(i) > (10^-10) && weights(i) < (C - 10^(-10))
        sgnLastY = labels(i) > 0;
        %calculate the first <w, x_i>
        partialSum = 0;
        for k = 1:xm
            partialSum = partialSum + labels(k) * kernel(i,k) * weights(k);
        end
        wXi = partialSum;
        for j = i:xm
            %look for a margin support vector of the opposite sign
            if weights(j) > (10^-10) && weights(j) < (C - 10^(-10)) && sgnLastY ~= (labels(j) > 0)
                %calculate the second <w, x_j>
                partialSum = 0;
                for k = 1:xm
                    partialSum = partialSum + labels(k) * kernel(j,k) * weights(k);
                end
                wXj = partialSum;
                bias = bias + -(wXi + wXj) / 2;
                counter = counter + 1;
            end
        end
    end
end
bias = bias / counter;
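For anyone puzzling over that averaging: a margin support vector $x_i$ (one with $0 < \alpha_i < C$) satisfies its margin constraint with equality, $y_i(\langle w, x_i \rangle + b) = 1$. Taking one positive and one negative such vector and adding the two resulting equations cancels the right-hand sides and gives

$$ b = -\frac{\langle w, x_+ \rangle + \langle w, x_- \rangle}{2}, $$

which is exactly the quantity the loop above averages over every such pair.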
Labels:
Computer Science,
CS,
Mathematics,
MATLAB
Soft-Margin 2-norm Bias Calculator
function [bias] = GetSoftBiasDos(weights, kernel, labels, C)
%2-norm analogue of GetSoftBiasUno: averages the bias over pairs of
%oppositely-labeled support vectors, with the extra alpha/C correction
%term that the 2-norm soft margin introduces.
[xm, xn] = size(kernel);
counter = 0;
bias = 0;
for i = 1:xm
    if weights(i) > (10^-10)
        sgnLastY = labels(i) > 0;
        %calculate the first <w, x_i>
        partialSum = 0;
        for k = 1:xm
            partialSum = partialSum + labels(k) * kernel(i,k) * weights(k);
        end
        wXi = partialSum;
        for j = i:xm
            %look for a support vector of the opposite sign
            if weights(j) > (10^-10) && sgnLastY ~= (labels(j) > 0)
                %calculate the second <w, x_j>
                partialSum = 0;
                for k = 1:xm
                    partialSum = partialSum + labels(k) * kernel(j,k) * weights(k);
                end
                wXj = partialSum;
                bias = bias + -(wXi + wXj) / 2 - (labels(i) * weights(i) - labels(j) * weights(j)) / (2 * C);
                counter = counter + 1;
            end
        end
    end
end
bias = bias / counter;
Labels:
Computer Science,
CS,
Mathematics,
MATLAB