How to optimally adjust the probabilities for the random IFS algorithm?
Most fractals can be seen as attractors of a given set of affine transformations $\{T_1,\cdots,T_N\}$. There are different ways one can generate a fractal by using this information. The two most common methods are the Deterministic IFS algorithm and the Random IFS algorithm.
The Random IFS Algorithm is a generalization of the Chaos game and essentially works as follows:
Determine the affine transformations $T_1,\cdots,T_N$ that characterize the fractal, and given an initial point $P_0$, iteratively compute
$$
P_n= T_i(P_{n-1}),
$$
where $i$ is randomly and uniformly chosen among the set $\{1,\cdots,N\}$.
This is quite a slow process, but it can be improved by adjusting the probabilities $p(T_i)$ of choosing transformation $T_i$ at each iteration. More specifically, the following choice of probabilities is known to speed up convergence considerably:
$$
p(T_i) = \frac{\det(M_{T_i})}{\sum_{j=1}^N \det(M_{T_j})} \tag{1}
$$
Here, $M_{T_i}$ denotes the matrix of transformation $T_i$, and $\det$ is the determinant. Roughly speaking, this choice makes $p(T_i)$ the fraction of the fractal $F$ occupied by $T_i(F)$.
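For concreteness, here is a minimal Python sketch of the weighted Random IFS Algorithm described above, with maps given as (matrix, offset) pairs. The helper names, the burn-in heuristic, and the use of $|\det|$ (to cover maps containing reflections) are my own illustrative assumptions, not part of the question.

```python
import random
import numpy as np

def det_probs(maps):
    """Probabilities from equation (1); |det| guards against reflections."""
    dets = [abs(np.linalg.det(M)) for M, _ in maps]
    total = sum(dets)
    return [d / total for d in dets]

def random_ifs(maps, n_points, probs=None, burn_in=20):
    """Iterate P_n = T_i(P_{n-1}), drawing the index i according to probs."""
    if probs is None:
        probs = [1.0 / len(maps)] * len(maps)  # uniform chaos game
    p = np.zeros(2)                            # initial point P_0
    points = []
    for n in range(n_points + burn_in):
        M, b = random.choices(maps, weights=probs, k=1)[0]
        p = M @ p + b                          # apply T_i(x) = M x + b
        if n >= burn_in:  # discard the transient before P_n nears the attractor
            points.append(p)
    return np.array(points)
```

With a list such as `maps = [(M_1, b_1), ..., (M_N, b_N)]`, calling `random_ifs(maps, 30000, det_probs(maps))` implements the weighted scheme of equation (1).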
My question(s):
Is there a better strategy? Is there an optimal one? Is this an open problem?
Tags: probability, algorithms, fractals
asked Aug 22 '16 at 19:36, edited Aug 22 '16 at 19:41
– Kuifje
On behalf of @Andres Iglesias: here you can find a reference that answers your question in full: J.M. Gutiérrez, A. Iglesias, M.A. Rodríguez, "A multifractal analysis of IFSP invariant measures with application to fractal image generation", Fractals, vol. 4, no. 1, pp. 17-27 (1996). The last sentence of the abstract reads: "Finally, as an application to fractal image generation, we show how this analysis can be used to obtain the most efficient choice for the probabilities to render the attractor of an IFS by applying the probabilistic algorithm known as 'chaos game'."
– dantopa
Dec 8 '18 at 0:01
1 Answer
I don't think this determinant-based formula is really the best way to pick the probabilities. As a concrete example, let's consider the IFS with functions
\begin{align}
f_1(\vec{x}) &= 0.98\;R(137.5^{\circ})\;\vec{x} \\
f_2(\vec{x}) &= 0.1\,\vec{x}+\langle 1,0 \rangle.
\end{align}
Note that $R(\theta)$ denotes the rotation matrix through the angle $\theta$, so $f_1$ represents a rotation and slight contraction centered at the origin, while $f_2$ represents a strong contraction and a shift to the right. The result, as we'll see, is a nice self-similar spiral.
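As a sketch, these two maps can be written in the same (matrix, offset) form in NumPy; the variable names here are illustrative:

```python
import numpy as np

theta = np.deg2rad(137.5)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation matrix R(137.5 deg)

f1 = (0.98 * R, np.zeros(2))                  # rotate, contract about the origin
f2 = (0.1 * np.eye(2), np.array([1.0, 0.0]))  # strong contraction, shift right
```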
Now, the determinants are $0.98^2=0.9604$ and $0.1^2=0.01$ and their sum is $0.9704$. For the probabilities, this yields
\begin{align}
p_1 &= 0.9604/0.9704 \approx 0.989695 \\
p_2 &= 0.01/0.9704 \approx 0.010305.
\end{align}
If we apply the Random IFS Algorithm to generate 30000 points approximating the attractor using these probabilities, the resulting image is definitely much better than the one produced by equal probabilities, but it now seems too heavily weighted towards the center and too light on the edges.
A better approach accounts for the fractal dimension of the object. If you have an IFS that satisfies the open set condition and consists of similarities with ratios $\{r_1,r_2,\ldots,r_m\}$, then the similarity dimension of the object is the unique number $s$ such that
$$r_1^s + r_2^s + \cdots + r_m^s = 1.$$
As a result, $\{r_1^s,r_2^s,\ldots,r_m^s\}$ is a good probability list and, in fact, is the correct choice of probabilities if you want a uniform distribution of points throughout the attractor. In the example above, with $r_1=0.98$ and $r_2=0.1$, we get $s\approx 1.51953$ and
\begin{align}
p_1 &= r_1^s \approx 0.969768 \\
p_2 &= r_2^s \approx 0.0302322.
\end{align}
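A small sketch of how one might compute $s$ and the resulting probabilities numerically, solving the equation above by bisection; the bracket $[0,10]$ and the tolerance are illustrative assumptions:

```python
def similarity_dimension(ratios, lo=0.0, hi=10.0, tol=1e-12):
    """Solve r_1^s + ... + r_m^s = 1 for s, assuming all 0 < r_i < 1."""
    f = lambda s: sum(r**s for r in ratios) - 1.0  # strictly decreasing in s
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid   # the sum is still above 1, so s must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

ratios = [0.98, 0.1]
s = similarity_dimension(ratios)  # ~ 1.51953
probs = [r**s for r in ratios]    # ~ [0.969768, 0.030232]
```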
Note that this scheme weights $f_2$ about three times more heavily than the determinant-based scheme, so we expect to trace out the outer portion more. The resulting picture looks much better.
The reason this works is rooted in the proof that the similarity dimension agrees with the Hausdorff dimension. Central to that proof is the construction of a self-similar measure on the attractor that is uniformly distributed in terms of density, and it is exactly this choice of probability weights that achieves it.
answered Aug 22 '16 at 20:09
– Mark McClure
Fascinating! I love the fact that it uses another characteristic of the fractal (its dimension). Thanks for the detailed answer.
– Kuifje
Aug 23 '16 at 0:30

Would you mind giving a reference for this, please?
– Kuifje
Aug 23 '16 at 1:21

How about Theorem 9.3 of the first edition of Falconer's Fractal Geometry, or Theorem 6.5.4 of the second edition of Edgar's Measure, Topology, and Fractal Geometry? I don't think either of these addresses the specific question of the choice of the probabilities in this algorithm, but they do discuss the construction of a well-distributed measure on the attractor, which is really the essential issue.
– Mark McClure
Aug 23 '16 at 2:11