Machine learning model to predict the best candidate
Problem: I would like to build a machine learning model that can predict the best candidate from any given set. What could be a good architecture for such a model?
Given: I have several training examples, each of which consists of:
- a set of candidates.
- a descriptor for the set as a whole.
- a label that tells which one of those candidates is the best in that set.
Details:
- The number of candidates in every set may be different.
- Every set is unordered.
- The descriptor of each set is currently a fixed-length one-hot vector, though I'm open to adding more features to it.
- Each candidate is represented by a fixed-length feature vector. (However, in the future the number of features may also differ from candidate to candidate.)
What I tried but didn't work:
One approach I tried was a simple MLP that takes one candidate as input and outputs whether or not that candidate is the best. But since this MLP doesn't know which set the candidate belongs to, it fails when the same candidate is the best in one set but not in another.
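For reference, a minimal sketch of that per-candidate baseline, assuming scikit-learn and a hypothetical `training_sets` structure (a list of dicts holding each set's candidate vectors and the index of its manually chosen best candidate — neither of which is specified in the question):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical layout: each set is {"candidates": (n_candidates, n_features) array,
#                                    "best_index": index of the labelled best candidate}.
X, y = [], []
for s in training_sets:
    for i, cand in enumerate(s["candidates"]):
        X.append(cand)
        y.append(1 if i == s["best_index"] else 0)

# Classifies each candidate in isolation, ignoring which set it came from.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000)
clf.fit(np.array(X), np.array(y))
```

This is exactly the setup that breaks when the same candidate is best in one set but not in another: the label depends on the set, while the input does not.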
To get more specific: in my current problem, each candidate is a 2D polygon with a fixed number of line segments. The training examples are labelled manually by picking the best-looking polygon in a given set. Each polygon is described by an array of (x, y) coordinates.
One problem I face is that a polygon has no natural starting point for its array of (x, y) coordinates. Currently I choose the vertex with the minimum value of x+y as the starting point and list the vertices counterclockwise from there.
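A minimal sketch of that normalization, assuming the polygon is given as an (n, 2) NumPy array of vertices:

```python
import numpy as np

def normalize_vertex_order(poly):
    """Rotate the vertex list to start at the vertex with minimum x+y,
    and make the traversal counterclockwise (positive signed area)."""
    poly = np.asarray(poly, dtype=float)
    # Signed area via the shoelace formula; negative means the vertices run clockwise.
    x, y = poly[:, 0], poly[:, 1]
    signed_area = 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)
    if signed_area < 0:
        poly = poly[::-1]
    start = np.argmin(poly.sum(axis=1))   # vertex with minimum x + y
    return np.roll(poly, -start, axis=0)
```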
Currently every 2D polygon has the same number of segments, but I will soon need to support polygons with varying numbers of segments.
In the future I would like to extend this model to 3D polyhedra too, but I don't yet know how to build a feature vector that describes a 3D polyhedron. I guess that's a problem for another day.
machine-learning neural-network prediction machine-learning-model
– jonnor (yesterday): Do you have labeled comparisons between polygons across sets?
– mak (yesterday): Nope, I do not have any comparisons across sets.
– jonnor (yesterday): That makes it a bit hard. The key here is to be able to formulate the problem as a standard type of ML problem. You can have a look at ranking via pairwise comparisons for some inspiration, but I'm not sure it fits entirely...
– mak (yesterday): Thanks a lot for your suggestions! I had also considered pairwise comparisons and I guess they might work, but the cost would grow as O(n^2). I also considered RNNs, but they are meant for ordered sequences, not for unordered sets.
– jonnor (yesterday): How many polygons are in each set, and how many sets?
1 Answer
I think it would make more sense to train a regression model that grades each candidate; then, within any particular set, you simply pick the candidate with the best grade.
Also, you should try replacing the raw point cloud with more meaningful geometric shape information (a sketch of such features follows the list below):
- Number of vertices/segments
- Segment length mean and variance
- Skewness
- Size and direction of the major and minor axes
- Center position
- Moments of area (first, second, third, ...)
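A minimal sketch of some of these hand-crafted features (vertex count, segment length statistics, centroid, and major/minor axes via the principal axes of the vertex cloud), assuming each polygon is an (n, 2) NumPy array of vertices; skewness and area moments are omitted here:

```python
import numpy as np

def polygon_features(poly):
    poly = np.asarray(poly, dtype=float)
    seg = np.roll(poly, -1, axis=0) - poly          # edge vectors
    seg_len = np.linalg.norm(seg, axis=1)
    centroid = poly.mean(axis=0)
    centered = poly - centroid
    # Principal axes of the vertex cloud as a rough major/minor axis estimate.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    return {
        "n_vertices": len(poly),
        "seg_len_mean": seg_len.mean(),
        "seg_len_var": seg_len.var(),
        "centroid_x": centroid[0],
        "centroid_y": centroid[1],
        "minor_axis": np.sqrt(eigvals[0]),
        "major_axis": np.sqrt(eigvals[1]),
        "major_axis_angle": np.arctan2(eigvecs[1, 1], eigvecs[0, 1]),
    }
```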
Update
To apply a regression model you will need to generate grades for the training set, and that might be a challenge. Here are a few heuristics to generate them (a small sketch follows the list):
- Since you have about 10k samples, you could assign to every candidate the probability of being the best candidate in any set (for example, if it is the best candidate in 10 sets you could give it a grade of $\frac{10}{10000}$).
- You could try clustering the samples and assigning to every cluster, as its grade, the probability that a candidate in that cluster is a best candidate, $\frac{N_{best}}{N_{cluster}}$, where $N_{best}$ is the number of candidates in that cluster that are the best in some set and $N_{cluster}$ is the total number of candidates in that cluster.
- You could assign a candidate a grade of $1$ if it is the best candidate in every set it appears in, and $1-e^{-\alpha N}$ where $N$ is the number of times it appears in a set without being the best candidate. You will have to tune the decay rate $\alpha$ like any other hyperparameter.
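A minimal sketch of the first (frequency-based) heuristic plus the per-set argmax at prediction time. The `training_sets` layout, the rounding-based candidate identity, and the choice of regressor are all assumptions for illustration, not part of the answer:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def frequency_grades(training_sets):
    """First heuristic: a candidate's grade is the number of sets it wins
    divided by the total number of candidates across all sets."""
    total = sum(len(s["candidates"]) for s in training_sets)
    wins = {}
    for s in training_sets:
        for i, cand in enumerate(s["candidates"]):
            key = tuple(np.round(cand, 6))   # crude identity for candidates repeated across sets
            wins[key] = wins.get(key, 0) + (1 if i == s["best_index"] else 0)
    return {k: w / total for k, w in wins.items()}

grades = frequency_grades(training_sets)
X = np.array(list(grades.keys()))            # the keys are the (rounded) feature vectors
y = np.array(list(grades.values()))
model = GradientBoostingRegressor().fit(X, y)

def predict_best(model, candidate_set):
    """Grade every candidate in a set and return the index of the best one."""
    return int(np.argmax(model.predict(np.asarray(candidate_set))))
```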
– mak (21 hours ago): Thanks @Pedro for your useful answer! However, in order to train a regression model I would need grades for each candidate in the training examples. How do you suggest I generate those grades?
– Pedro Henrique Monforte (15 hours ago): True, I proposed a few heuristics for that and updated the answer. Sorry for forgetting that crucial point.
– Pedro Henrique Monforte (12 hours ago): Could you get back to us with a small report on how any of these tips worked out? I am a computer vision researcher, and geometry-related models are really useful in my field.